The Knowledge Problem

I keep coming back to a RAND Corporation finding from last year: roughly 80% of AI projects fail to deliver their intended outcomes. That’s twice the failure rate of conventional IT projects. MIT’s 2025 research on generative AI is grimmer — 95% of GenAI pilots never make it past the pilot stage.

I think those numbers should bother people more than they do.

Every time I bring them up, the conversation goes straight to the usual explanations. Bad data. Lack of executive buy-in. Not enough AI talent. I’ve given those answers myself. They’re not wrong, exactly, but they don’t explain why the same pattern keeps showing up across industries, across company sizes, across every kind of AI project. There’s something structural going on underneath, and most companies aren’t looking at it.

Here’s what I think it is.

Every organization runs on knowledge that lives in people’s heads. The maintenance tech who hears something in a bearing and knows it’s about to fail before any sensor fires. The underwriter who reads a deal file, gets a feeling something’s off, and is right far more often than her formal process can account for. The operations manager who routes work based on patterns he’s never written down, because he’s never had to.

That knowledge is real and valuable. It’s also completely inaccessible to an AI system. Algorithms don’t work from intuition or experience. They work from data, rules, labeled examples, and documented decision logic. To use what your experts know, you have to get it out of their heads first, and that turns out to be genuinely hard to do. Most organizations don’t do it well. I’d argue most don’t really try.

That’s the gap where AI projects go to die.

The failure pattern is predictable enough that I’ve started thinking of it as a sequence that goes something like this:

A company decides to implement AI. They bring in a vendor or a data science team. The team spends a day or two interviewing domain experts, asks them to describe their process, and calls that the requirements phase. Then they build. The AI produces outputs the domain experts don’t trust, because the model missed all the contextual nuance nobody captured in that day or two of interviews. The experts go back to their old way of working. Management mandates use of the tool. Resistance builds. The project dies.

What gets me about this pattern is that the technology usually works. The data is often adequate. The failure happens at the knowledge extraction step, and everything downstream inherits that problem.

I’ve watched this play out in financial services, where firms built underwriting models their own underwriters wouldn’t touch. In manufacturing, where predictive maintenance tools sat unused because the floor techs didn’t trust them. In healthcare, where physicians quietly routed around AI-assisted workflows. In most of those cases the AI was technically functional. It just didn’t know what the people knew.

This problem hits mid-market companies (firms in roughly the $50M to $500M revenue range) particularly hard, for three reasons that reinforce each other.

The first is knowledge concentration. A Fortune 500 company might have twenty experienced underwriters. A 150-person company has two. When your entire AI project depends on what one or two people know, you’re betting on their availability, their willingness to sit down and talk through it, and their ability to articulate what they do. All three are harder to come by than most project plans assume.

The second is that there’s no slack. The people who hold the critical knowledge are also the people running the operation. Every hour they spend with the AI team is an hour they’re not doing their actual job. Large enterprises can backfill that time. Mid-market firms mostly can’t.

The third is that knowledge-sharing at this scale tends to be informal. People learn by sitting next to someone who knows how to do it, asking questions in the hallway, watching the experienced person work. That’s efficient for day-to-day operations, but it’s a liability for AI, which needs structured, formalized knowledge: the kind that can actually be processed computationally. Making that leap is a genuine organizational change, not a technical task, and most mid-market firms aren’t set up for it.

These three things don’t just add up — they’re multiplicative. No slack means no time to formalize. Informal practices keep knowledge concentrated in a few people. Concentration creates single points of failure. The project that looked like a technology challenge turns out to be an organizational design challenge wearing technical clothes.

The companies I’ve seen get this right share one thing: they treat knowledge extraction as the primary work, not the precondition. Before anyone evaluates platforms or vendors or considers a data science hire, they figure out whether the organization can actually articulate the knowledge AI would need. Can your domain experts walk through their decision-making in detail? Not just what they do, but why, and what they’re weighing? If the answer is uncertain, that’s the first project.

When they do knowledge sessions, they don’t run interviews. They walk through specific real cases: what did you notice, what did you check, what alternatives did you consider, why did you decide what you did. The exceptions are especially worth digging into, because that’s where the most valuable knowledge tends to concentrate. This takes time and can’t be compressed into a couple of afternoons.

Most of the successful implementations I’ve seen also had someone who could work both sides of the room: listen to a domain expert describe their process and know what needed to be formalized, and read a technical spec and flag where the domain logic was wrong. In larger companies this role sometimes gets called an AI translator. In a mid-market firm it might be your most technically literate operations person, or your most business-minded engineer. Either way, they need actual dedicated time, because tacking this onto an existing job doesn’t work.

I think technology selection is where most companies want to start because it feels like the real work. But you can’t build a useful AI system on knowledge you haven’t captured yet.

If that’s where you’re stuck, I’d be glad to think through it with you: [email protected]
