There Has Never Been a Better Time to be a Junior Developer - And It Won't Last Forever
AI as Mentorship-as-a-Service
Everyone in tech is convinced that AI will eliminate junior developers first. “Why hire a junior when AI can write code?” they ask. The prevailing wisdom is that entry-level developers are most vulnerable to automation.
They’re dead wrong.
I wrote “The Future of AI Belongs to Experienced Operators with Good Taste” a few months back and that’s still absolutely true. But there’s a massive plot twist most people are missing: AI coding assistants aren’t just productivity tools - they’re the great equalizer.
Junior developers with the right attitude and intentional practice are positioning themselves to outpace expensive, slow senior developers who refuse to adapt. The window is open right now, but it won’t stay open forever.
The Demise of “Slow Senior” Developers
What’s about to change inside the software industry is the velocity, volume, and variety of software that individual developers can produce. AI coding assistants allow us to translate our general programming experience and knowledge to new domains.
With the help of Claude 3.5, for instance, I was able to apply my decades of C# / .NET experience to writing 3D animations in JavaScript. Sure, I needed to know some basic JS - but it was mostly helping the agent organize the code so scenes were animated uniformly without lots of artifacts, scaling problems, and so on.
More recently, using Claude 4.0 and some self-hosted models like qwen3-coder-instruct, I’ve been able to ship self-updating .NET AOT CLIs, major updates to Sdkbin’s Entity Framework Core and ASP.NET Core infrastructure, and more.
It’s worth making an important distinction here. I’m still reviewing this code. I’m still applying my tastes very carefully to all of it. I’m scrutinizing it. I’m the one debugging it. And I’m the one authoring tests for a lot of this code. But cutting out months of drudgery on each of these projects by leveraging AI coding tools matters. Speed matters. Getting product to market matters, above all else.
AI-coding tools are extraordinarily transformative in how they shift engineering focus from how things should be done to what actually gets done. I don’t need to have an overpaid senior engineer spend two months researching how to build a UI for an internal tool when I can simply ask an AI to prototype it in a matter of minutes.
There are thousands of largely pointless software development tasks like this at any given time. At the end of the day, no one really cares whether a UI was made with React, Knockout.js, jQuery, or whatever, as long as the tool works. Having all of those accelerated adds up to significant productivity improvements for the organization as a whole.
Here’s the dirty secret everyone’s missing: AI isn’t coming for junior developers. It’s coming for expensive, slow senior developers who refuse to adapt - the high-cost, low-output JIRA jockeys who don’t produce very much.
And JIRA jockeys are legion throughout the software industry as a whole - droves of “senior” developers who’ve been in-role for 10-15 years and haven’t evolved very far past a lot of the skills they learned their first one to five years on the job. Sheer inertia and the managerial fear of specialized domain knowledge walking out the door are what have kept these people safe and grossly overpaid for quite some time.
In many organizations this is about to change and output rather than process will become the rightful king, as it always should have been.
Other organizations led by the usual BigCo risk-averse laggards and MBAs will probably continue to run the same inefficient, less profitable playbook with demonstrably worse outcomes for customers until market realities overcome bureaucratic inertia, however long that may take.
Quality vs. Velocity of Output
LLMs help tremendously with improving developer velocity, even with non-coding tasks. For instance:
- Transforming chats in Discord / Slack / Teams into bug reports;
- “Steering” - helping developers pick a direction to triage a bug or research a fix;
- Reverse-engineering / diagramming legacy code; and
- Hundreds of other examples.
But of course - the real raison d’être of AI coding tools is to produce code. LLMs are getting astonishingly better at doing this well, but they’re far from perfect.
We just shipped some major changes for our Sdkbin rewrite this week - and Cursor / Claude Sonnet 4 was very helpful in major facets of this project:
- Wrote all of the EF Core data migration code for our sanely-designed schema;
- Wrote the Playwright tests to do UI testing for areas of our app that can’t easily be tested any other way;
- Used my mssql-mcp MCP server for Microsoft SQL to find edge cases in our dev/qa/production environments not accounted for in our migration code; and
- Rewrote entire portions of our UI to support new business requirements and data access layer changes.
You know what Claude was totally useless for? Designing the new schema itself or implementing other “architectural tastes” like value objects consistently. I had to do that. And that’s the way it’s going to be, for quite some time - LLMs produce output faster than humans, which means they produce mistakes faster too! And mistakes compound!
> Why can AIs code for 1h but not 10h?
>
> A simple explanation: if there’s a 10% chance of error per 10min step (say), the success rate is:
>
> 1h: 53%
> 4h: 8%
> 10h: 0.2%
>
> @tobyordoxford has tested this ‘constant error rate’ theory and shown it’s a good fit for the data.
>
> — Benjamin Todd (@ben_j_todd) June 15, 2025
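The arithmetic behind that tweet is easy to verify: if each step fails independently with probability p, the chance of n consecutive successes is (1 − p)^n. Here’s a quick sketch; the 10-minute step size and 10% per-step error rate are the tweet’s assumptions, not mine:

```python
# Probability that an agent finishes a long task when each 10-minute step
# fails independently with probability p (the "constant error rate" model).
def task_success_rate(hours: float, p: float = 0.10, step_minutes: int = 10) -> float:
    steps = int(hours * 60 / step_minutes)
    return (1 - p) ** steps

for hours in (1, 4, 10):
    print(f"{hours:>2}h: {task_success_rate(hours):.1%}")
# Prints roughly 53.1%, 8.0%, and 0.2% for 1h, 4h, and 10h.
```

That geometric decay is exactly why active human review matters: each checkpoint where you catch and fix a mistake resets the chain instead of letting errors compound.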
Keeping the quality of output high requires experience and active engagement with the LLM. In any non-trivial application the agent needs to be coding with you, not instead of you.
This is where leveraging LLMs can be very instructive even for people who’ve been in the software industry for a long time. LLMs allow you to rubber duck a lot of your design ideas, user stories, implementation plans, technical research, and they’re very, very useful at breaking you out of the analysis paralysis trap, which is one of the biggest destroyers of productivity for software developers.
LLM Mentorship-as-a-Service
But where I see the real value in large language models is in acquiring entirely new technical skills. For instance, if I wanted to learn CSS from the ground up, something I haven’t done since I was ten years old back when it was brand new, I could pair with an LLM and build my own CSS grid system from scratch to understand the fundamentals and improve as a web developer.
Lately, in my own day-to-day work, I’ve been leaning on LLMs to help me learn:
- Getting AOT compilation working for .NET, an area that’s relatively new to me despite my decades of .NET experience;
- Creating Python applications for self-hosting large language models using “bare metal” technologies like llama.cpp. I’ve learned a lot about how Python, HuggingFace, PyTorch, inference providers like llama.cpp, PyPI, et al. work as a direct result, since it’s been a long time since I’ve written any Python by hand; and
- Funnily enough, writing desktop applications. I’ve shipped two written in Python1 2 since switching my full-time desktop at home from Windows to Ubuntu.
Here’s the interesting thing about LLMs. You could certainly make the argument I would have learned a lot more by grinding it out and building all these applications by hand. But the overhead of actually developing any of these applications by hand, in terms of time-value, was so prohibitively large that I would have not bothered in the first place. Large language models trivialize the marginal cost of working on side projects and broaden your horizons in the process. A win-win for everybody except for the AI doomers.
Your Unfair Advantage Starts Right Now
If you’re a junior developer or trying to break into tech, you’re about to witness the greatest opportunity transfer in software history.
While expensive senior developers are busy protecting their turf and complaining about AI on Twitter, you have something they’ve lost: hunger and adaptability. You’re not weighed down by 15 years of cargo cult programming and bureaucratic scar tissue.
You can do what most people do - sit around waiting for someone to give you permission to succeed. Or you can recognize that AI just handed you a cheat code to skip past years of corporate ladder-climbing and start shipping real software immediately.
Where the puck is going with large language models and the employment market is simple: people who can get things shipped are going to win. What you want to accumulate is experience successfully shipping things: learning from failures, talking to customers, and building strong, robust processes that will help you deliver software at an even faster rate down the road. LLMs give you a judgment-free, permission-free, and, as of today, very low-cost pipeline for accumulating all this experience if you choose to take it. So stand up and do it.
The Window Is Closing
The current AI subsidy party won’t last forever. We’re in the Uber-circa-2015 phase where VCs are bleeding money to buy market share:
Every dollar these companies make costs them $1.25 to generate. Your $20/month Cursor subscription is heavily subsidized by investors betting on future dominance. When the land grab ends, prices will skyrocket.
This is your moment.
While complacent seniors debate whether AI will “replace developers” in LinkedIn comment threads, smart juniors are using AI to become unstoppable shipping machines. They’re accumulating real experience, building real products, and developing real taste at unprecedented speed.
The great reversal is happening right now. The question isn’t whether you’ll participate - it’s which side you’ll be on when the dust settles.
Choose wisely. The window won’t stay open forever.
1. https://github.com/Aaronontheweb/witticism - WhisperX-powered voice transcription tool that types text directly at your cursor position. Hold F9 to record, release to transcribe. ↩
2. https://github.com/Aaronontheweb/ubuntu-elgato-facecam - a Linux tray application aimed at making it easy to work with Elgato Facecam. Believe it or not, this was easier than working with OBS. ↩