The Developer Mentoring Crisis

A post on r/ExperiencedDevs caught my attention last week. The question was:

“Whatever happened to just asking questions at work?”

The developer had walked into a new job and found nobody to ask because everyone was buried in velocity targets.

The responses were interesting. Senior developers described junior colleagues who'd rather stay stuck for hours than interrupt someone's flow. Others described teams where asking a question feels like admitting failure, and organizations where mentoring only exists in formal programs nobody has time for. At some point, we decided developer productivity meant ticket velocity, and this is what that looks like.

Velocity Ate Mentoring

Most engineering organizations still measure individual developers by story points closed, tickets resolved, or commits pushed. A 2025 DORA study found 42% of teams admit to gaming these numbers when they’re tied to reviews. More output means more productivity on a dashboard. It tells you nothing about what’s actually being built or whether the team is getting better.

When a senior developer stops to explain why a particular architecture decision was made, zero tickets get closed. When someone reviews code thoughtfully and explains the reasoning behind their feedback, velocity numbers don’t move. The conversations where context gets transferred, where someone explains “here’s why we did it this way and here’s what we tried first,” none of that shows up in sprint reports.

So those conversations stopped. Nobody decided to end them. The incentive structure just made them impossible to justify. The senior developer who used to mentor now handles release management, deployment coordination, and QA verification on top of closing tickets. No time left for the hallway explanations that actually build people.

What That Costs

When mentoring disappears, something specific is lost: judgment. Junior developers can use AI tools to generate code, but they can’t debug when those tools produce something that breaks in their specific context. They know how to prompt ChatGPT for a function but don’t understand why one architecture creates problems six months later while another doesn’t. That kind of understanding doesn’t come from documentation. It comes from watching experienced people work through hard problems and hearing them explain their reasoning out loud.

The pattern looks a lot like what happened in manufacturing during the 1980s and 1990s. Companies drove so hard on efficiency metrics that they eliminated informal knowledge transfer on factory floors. Experienced machinists who understood subtle material variations, early warning signs of equipment failure, workarounds for design flaws, all of that walked out when those workers retired. Nobody had written it down because nobody had been asked to.

Software development is repeating this. The senior developers who’ve seen how decisions compound over time, who know which patterns fight each other six months later, aren’t passing any of that on. No time. No incentive. And the 42% gaming their velocity numbers aren’t going to self-report a mentoring gap.

What you get is an organization that executes known solutions efficiently and adapts to new problems poorly.

Measuring What Actually Builds Capability

Productivity measurement isn’t the problem. The problem is that most organizations only count output. They don’t ask the questions that tell you whether a team is building capability:

  • How many junior developers become productive contributors within six months?
  • How quickly do new team members gain context on existing systems?
  • How well does the team handle a problem nobody has seen before?
  • When a key person leaves, how much institutional knowledge walks out with them?

Those questions point toward different incentives. Count mentoring hours in sprint capacity. Recognize that a senior developer who helps three people get unstuck contributes more than someone closing twice the tickets alone. Treat code review as teaching, not gatekeeping.

This matters more now than it would have five years ago. AI tools are getting better at generating standard code, which means speed is becoming table stakes. The teams that pull ahead will be the ones who can handle what AI can't: novel integrations, architectural trade-offs, problems that require understanding how systems interact over months and years. Those are judgment calls, and judgment is exactly what disappears when nobody has time to mentor.

The senior developers who could transfer that context are still around in most organizations. They’re just buried under release coordination and QA and deployment support. The incentive structure made mentoring invisible, and invisible things don’t get done.
