The Formation Problem


A previous post explored the gap between what domain experts know and what AI systems can use. The argument was about getting tacit knowledge into AI.

This one is about what happens to the humans who develop that tacit knowledge in the first place.

There’s no shortage of tech layoff headlines. But underneath the visible cuts, something quieter is happening that I think matters more in the long run: companies stopped hiring junior engineers. Entry-level employment in AI-exposed occupations dropped 14% among workers aged 22-25 between 2024 and 2025, according to Anthropic’s labor market analysis. The layoffs make the news. The roles that never get posted don’t.

That distinction matters. A firing is visible. It generates pressure to respond, to retrain, and to find alternatives. A hiring freeze on junior roles is invisible. No one protests their own un-hiring. The jobs that aren’t posted don’t show up in unemployment data. The pipeline drains quietly, and the problem doesn’t announce itself until years later.

What Junior Developer Work Was Actually Doing

Software teams have always had work that experienced engineers didn’t want to do and junior engineers got paid to learn from. Tracking down a bug across a sprawling codebase. Writing the boilerplate that connects system A to system B. Building the documentation nobody reads until something breaks. Fixing whatever went wrong at 2am.

Senior engineers called it scut work and junior engineers called it their job. Both were right and both missed what was actually happening.

That work was the formation process.

Debugging a gnarly bug in an unfamiliar codebase teaches pattern recognition that no classroom can get at. You learn, through direct experience, where things break at scale, why certain architectural decisions look fine until they don’t, what “this will cause a problem later” feels like before it becomes a problem. You develop a model of the system that lives in your hands as much as your head.

Writing boilerplate feels mechanical, but it isn’t. It forces you to understand the interfaces between systems: what they expect, what they return, where the contracts are implicit and where they’re explicit. Writing documentation forces you to understand what you actually know versus what you assumed. Fixing the 2am incident teaches you what matters under pressure.
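A toy sketch of what that looks like in practice. The systems and field formats here are hypothetical, not from any post; the point is that the implicit contract only surfaces when you write the glue yourself:

```python
from datetime import datetime, timezone

def to_epoch_seconds(timestamp: str) -> int:
    """Glue between a hypothetical system A and system B.

    The explicit contract: A emits ISO-8601 strings, B wants epoch seconds.
    The implicit contract, discovered only by writing this glue: A's
    timestamps are UTC but sometimes carry no offset, so naive parsing
    silently shifts times unless we pin the timezone ourselves.
    """
    parsed = datetime.fromisoformat(timestamp)
    if parsed.tzinfo is None:
        # Pin the undocumented assumption: offset-less timestamps are UTC.
        parsed = parsed.replace(tzinfo=timezone.utc)
    return int(parsed.timestamp())
```

The one-line timezone check is the formation: nothing in either system’s documentation mentions it, and the engineer who wrote it now knows something about both systems that no spec records.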

None of this is efficiently teachable. It accumulates through repetition, in the context of actual systems with actual stakes. The experienced engineers on your team who can supervise AI output and catch what it gets wrong? They got there by doing this work for years. There was no shortcut.

AI coding assistants now generate boilerplate, write integration glue, draft documentation, and handle a growing share of routine debugging. GitHub reported in early 2025 that Copilot was generating nearly half the code in repositories where it was enabled. The work that remains for junior engineers is increasingly review and prompting, not the hands-on building that formed their predecessors.

What the Early Research Suggests

The data on this is still emerging, but what exists points in a consistent direction. METR’s controlled trial on AI-assisted development found that experienced developers who believed they were 20% faster were actually 19% slower. Anthropic’s labor market analysis found the 14% drop in entry-level hiring I mentioned above. And early studies of junior engineers using AI coding assistants are showing mastery deficits, with the largest gaps in debugging: precisely the skill needed to verify what AI produces.

The pattern across these studies isn’t that AI degrades existing skills. It’s that AI removes the work that built those skills in the first place. The formation happens through the struggle. Take away the struggle, and the formation doesn’t happen.

This matters beyond junior engineers because debugging isn’t a junior skill. It’s foundational for working with AI at all. Any engineer who’s going to supervise AI output, catch errors before they compound, and know when the AI is confidently wrong needs the kind of pattern recognition that debugging builds. AI is automating away the apprenticeship that produces the people capable of using AI well.

Part of me wonders if anyone building these tools has thought through that particular loop.

The Raspberry Pi Moment

In the 1980s, the path into computing ran through hardware you could open and modify. You built the machine. The machine taught you. The programmers who came out of that era developed foundational mental models of how computation worked at a physical level and what the abstractions were abstracting. This shaped how they thought about software for decades.

The 1990s sealed that path off. Consumer computers became appliances. The Xbox wasn’t a platform for building things; it was a platform for playing things. Universities started noticing a shift in the early 2000s: incoming students who’d never written a line of code, who had no intuition for what happened inside the machine, who needed to be taught from scratch concepts that earlier cohorts had absorbed through tinkering.

The response was the Raspberry Pi. Designed explicitly to recreate accessible entry points. A machine cheap enough to break, open enough to modify, close enough to the hardware that you could feel what was happening. A deliberate intervention to restore a formation pathway the market had inadvertently closed.

We’re at an equivalent inflection point with AI, and there’s no Raspberry Pi equivalent yet.

AI handles the work that formed junior engineers the same way consumer appliances handled the work that formed early programmers. The work still gets done but the formation doesn’t happen. The people who would have learned through that work end up on the other side of the transition with a different set of skills and a different set of gaps.

The hardware transition played out over a decade before universities noticed and the Raspberry Pi emerged as a response. AI coding assistance went from novelty to default in about 18 months. The formation pipeline is draining faster than the last one did, and there’s less time to notice before the consequences arrive.

The Pipeline Consequences

Senior engineers who can supervise AI well (those who know when the code is wrong, when the architecture is subtly off, when the test passes but the system will fail in production) got there through years of the work that AI is now doing. Their judgment is the accumulated residue of thousands of debugging sessions, thousands of integration problems, thousands of incidents. It’s tacit knowledge that transfers through apprenticeship, not documentation. And it’s the same kind of hard-won wisdom that separates knowing what the code does from knowing whether it should.

New engineers learn it by working alongside senior engineers on hard problems, by making mistakes that senior engineers help them diagnose, by building pattern recognition that only comes from exposure to many instances of the same class of problem.

If junior engineers stop doing the work that builds that pattern recognition, they don’t arrive at the senior level with the judgment senior engineers currently have. They arrive with something different: faster in certain ways, but with gaps in exactly the places that matter most for supervising AI. It’s the same dynamic I wrote about in When AI Automation Erases Competitive Advantage: the thing that differentiates gets optimized away before anyone realizes it was load-bearing.

The 14% drop in entry-level hiring isn’t an abstraction. It represents a cohort of engineers working their way toward seniority without the formation experience that produced their predecessors. If nothing changes, in a few years those engineers will be the senior people on your team. They’ll be the ones deciding whether to trust what the AI produced.

A Different Kind of Problem

The knowledge extraction problem from the previous post is hard, but it’s a process problem and interventions are available.

The formation problem is different. Nobody decided to hollow out the junior pipeline. It’s just what happened when a lot of teams made the same reasonable budget call at the same time.

In a nutshell: we’re using AI to skip the work that produces the people who know how to use AI.
