The Exposure Gap

Anthropic published a research paper last week that tries to answer a question most AI discussions skip entirely: how much of AI’s theoretical capability is anyone actually using?

The paper is called “Labor market impacts of AI: A new measure and early evidence” by Maxim Massenkoff and Peter McCrory. They built a metric called “observed exposure” that combines what AI can theoretically do with what people are actually using it for, weighted by whether the usage is automated or work-related. It’s a more honest measurement than most of what I’ve seen, because it separates “could AI do this task” from “is anyone actually using AI for this task.” I wrote about a version of this disconnect in The AI Value Gap and found that 78% of companies claim AI adoption but only 4% see real value. This research gives that gap a much sharper edge.

What the Data Shows

The researchers combined ONET occupation data (roughly 800 occupations), Anthropic’s own Claude usage data, and earlier task exposure estimates to build a picture of where AI is landing in the real economy.
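
The construction can be sketched roughly like this. Note this is an illustrative toy, not the paper’s actual methodology: the field names, weights, and averaging scheme are my assumptions, chosen only to show how “could AI do it” and “is anyone using AI for it” combine into one number.

```python
# Toy sketch of an "observed exposure" style metric.
# All task data and weights below are hypothetical.

def observed_exposure(tasks):
    """Average, over an occupation's tasks, of:
      feasibility  (0..1): could AI plausibly do this task?
      usage_share  (0..1): share of real AI usage touching the task
      work_weight  (0..1): discount for non-work / non-automated use
    """
    total = 0.0
    for t in tasks:
        total += t["feasibility"] * t["usage_share"] * t["work_weight"]
    return total / len(tasks)

# Hypothetical occupations: one screen-based, one physical.
programmer_tasks = [
    {"feasibility": 0.9, "usage_share": 0.8, "work_weight": 0.9},
    {"feasibility": 0.8, "usage_share": 0.6, "work_weight": 0.8},
]
cook_tasks = [
    {"feasibility": 0.1, "usage_share": 0.0, "work_weight": 0.0},
]

print(observed_exposure(programmer_tasks))  # high: feasible AND used
print(observed_exposure(cook_tasks))        # zero: no real usage
```

The point of the multiplication is that a task scores high only when capability and real usage coincide; high feasibility with zero usage contributes nothing, which is exactly the gap the metric is built to expose.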

Computer programmers show about 75% task coverage, meaning AI tools are being used for roughly three-quarters of their measurable work tasks. Customer service representatives are close behind, with data entry keyers at 67%. These are the occupations where AI has gone from theoretical to operational.

Then there’s the other end: about 30% of workers have zero AI task coverage. Cooks, mechanics, bartenders: jobs where the work is physical, contextual, and doesn’t flow through a screen. No amount of prompt engineering is going to change a brake pad.

The workers with the highest AI exposure tend to be older, female, more educated, and higher-paid, earning about 47% more on average than workers with low exposure. Graduate degree holders make up 17.4% of the exposed group versus 4.5% of the unexposed. AI usage is concentrated in knowledge work, not manual labor. That doesn’t mean knowledge workers are losing their jobs; the paper finds they’re not, at least not yet. But it does mean that if displacement eventually comes, it will hit higher-paid, more educated workers first. That flips the common assumption about who’s most at risk.

The Employment Question (So Far)

The paper’s most interesting finding is also its most cautious: there’s no systematic increase in unemployment among highly exposed workers since late 2022. The jobs most affected by AI capability aren’t disappearing. At least not yet.

There is one signal worth watching. Job finding rates for younger workers in highly exposed occupations dropped about 14% after ChatGPT launched. The researchers note this is barely statistically significant, and they’re careful not to overclaim. But it fits a pattern I keep seeing in conversations with hiring managers: when AI coverage for an occupation is north of 50%, the calculus on whether to hire a junior person changes. You don’t eliminate the role; you just don’t backfill it as quickly. There’s also a cognitive dependency angle here; research I covered in Balancing Human Thought and AI Assistance shows younger workers already demonstrate higher AI dependence and lower critical thinking scores. If you’re not hiring them and the ones working are offloading more thinking to the tools, the junior talent pipeline has a compounding problem.

BLS employment projections through 2034 show weaker growth in occupations with higher AI exposure. That’s a slow squeeze, not a cliff. The jobs don’t vanish overnight, but the hiring pipeline narrows, the roles shift, and five years later the headcount is lower without anyone having made a dramatic announcement about layoffs. The risk I wrote about in When AI Automation Erases Competitive Advantage applies here too: when fewer junior people enter the pipeline, you eventually lose the bench that develops into your senior talent. The knowledge gap doesn’t show up now; it shows up in five years, when you need experienced people and the pipeline was thinner than you realized.

Why This Matters for Planning

Most organizations I’ve talked to about AI strategy are working with one of two mental models: either AI is going to transform everything (so we need to move fast) or it’s mostly hype (so we can wait). This research suggests a third reality that’s harder to plan around.

AI adoption is uneven, concentrated in specific occupations, and moving at different speeds across different kinds of work. The paper found that most of what people use Claude for falls within tasks researchers had already flagged as feasible; people aren’t pushing AI into territory it can’t handle. But only a fraction of those feasible tasks have meaningful real-world adoption. The capability frontier is wide while the actual usage frontier is narrow. That’s the exposure gap, and it means the transformation is real but patchy.

For anyone trying to build workforce strategy around AI, the “observed exposure” metric is more useful than the broad capability assessments most vendors are selling. Knowing that an AI tool could theoretically automate a task is interesting. Knowing whether anyone in your industry is actually using it for that task is actionable. And if you’re still trying to measure what AI is worth to your organization, the productivity measurement problem I wrote about recently hasn’t gone away; METR’s study collapsed because developers refused to work without AI. You can’t cleanly measure the impact of something people won’t stop using.

The paper is worth reading in full if you’re responsible for technology or workforce decisions. The methodology is transparent, the claims are appropriately hedged, and the data is more granular than anything else I’ve seen on this topic. It’s Anthropic studying its own product’s impact on the labor market, which makes it both unusually well-informed and worth reading with that context in mind.

The gap between what AI can do and what AI is doing is where all the interesting strategic questions live right now.
