
The Danger of "Everyone Knows"

[Photo: black and white image of sandstone cliffs in Utah, their base hidden by fog and low clouds]

When the fog rolls in, you can see the cliff but not the edge. Technology consensus works the same way.

The most expensive mistakes I’ve seen executives make start with the same phrase: “Everyone knows that…”

Everyone knew you needed to be cloud-first. Everyone knew you needed a Chief Digital Officer. Everyone knows you need an AI strategy.

When a belief becomes so widely accepted that questioning it feels foolish, you’re standing near a cliff. Not always at the edge, but close enough that the wind should make you nervous.

In markets, researchers have studied what happens when sentiment becomes unanimous. Most of the time, there’s a healthy mix of bulls and bears. People disagree. That tension creates stability. But occasionally—maybe a few times per decade—consensus becomes nearly complete. The last skeptics give up. The holdouts capitulate. Everyone agrees on what’s obviously true.

That’s when things get expensive.

In 2000, everyone knew the internet had changed everything, and old economy stocks were dead.

In 2008, everyone knew housing prices only went up.

In 2021, everyone knew interest rates would stay low forever.

The internet did change things. Housing had been rising. Rates had been low for years. The underlying observations were correct. The conclusions drawn from them, and the timing of those conclusions, were catastrophic.

When everyone agrees, there’s no one left on the sidelines to push the trend further. The last buyer has bought. The last seller has sold. What comes next is mean reversion.

This pattern shows up beyond markets. It shows up in technology decisions. And it shows up in the strategies that land in your boardroom with phrases like “we can’t afford to fall behind.”

Consider what happened around 2015, when the message became clear: you need to move to the cloud. Every analyst report said so. Every vendor presentation said so. Every board member who’d read a trade magazine said so. Companies rushed to migrate. Not because they’d done rigorous cost-benefit analysis, but because falling behind felt riskier than moving forward.

A 2019 Fortinet study found that 74% of companies had moved applications from the cloud back on-premises after failing to achieve expected returns. They’d migrated, realized their costs had increased rather than decreased, discovered their applications weren’t suited for cloud architecture, and reversed course.

Separately, Cloud Security Alliance research found that 90% of CIOs had experienced failed or disrupted data migration projects, with the average project taking 12 months, far longer than originally estimated.

Cloud computing worked. The technology was real, and many companies benefited from it. But the unanimous conclusion that every company should migrate everything as quickly as possible led to rushed decisions and costly reversals. The consensus wasn’t wrong about cloud. It was wrong about urgency and universality.

The same dynamic appeared in the Chief Digital Officer role. By 2015, the CDO had become the trendiest C-suite hire. Analyst firms said you needed one. Boards asked why you didn’t have one. Companies hired CDOs by the thousands. The role grew from a few hundred positions in 2013 to tens of thousands by 2018. Hiring peaked in 2016.

Then something awkward happened.

PwC and IMD research found the average CDO tenure was just 31 months, the shortest of any C-suite role. More than 75% of CDOs left the company entirely upon exiting the role.

Now, a short tenure in a transformation role can mean different things. Maybe the job was done and the executive moved on, or maybe they got poached. But the pattern across hundreds of companies suggests something else: many organizations created the role because everyone else was creating it, then struggled to define what they actually needed from it. CDOs assumed roles with enormous scope, unclear authority, and expectations shaped by vendor presentations. Deloitte predicted the role would “cease to exist by 2020.” That was hyperbolic, but the underlying observation was valid: many companies had hired for a category (“digital leadership”) rather than for a specific organizational gap.

Today, the message is louder than either of those: you need an AI strategy. Every conference says so. Every vendor says so. Every board member who’s tried ChatGPT says so. The fear of falling behind has reached a pitch that makes the cloud migration frenzy look measured.

The early results are concerning.

A recent MIT report found that about 95% of enterprise generative AI pilots fail to achieve rapid revenue acceleration. IDC research, conducted with Lenovo, found that 88% of AI proof-of-concepts don’t make it to wide-scale deployment. For every 33 AI POCs a company launched, only four graduated to production, roughly a 12% success rate.

S&P Global’s 2025 survey of over 1,000 enterprises found that 42% of companies abandoned most of their AI initiatives this year, up from 17% in 2024. RAND Corporation analysis puts overall AI project failure rates above 80%, roughly twice the failure rate of non-AI technology projects.

AI projects fail for many reasons: technical complexity, data quality problems, skill gaps, unclear objectives, and scope creep. Consensus alone doesn’t cause these failures, but it creates conditions in which failures become more likely and more costly.

Here’s what I mean by that.

When everyone agrees on a direction, the question shifts from “should we?” to “how fast?” Timelines compress, and the pressure to act becomes disconnected from the time required to act well. Cloud migrations got rushed. CDO hires got rushed. AI pilots are getting rushed. Speed becomes a proxy for seriousness.

When the direction seems obvious, companies skip the hard work of defining what their specific organization needs. They adopt generic solutions to generic problems. “Move to cloud” became the strategy. “Hire a CDO” became the strategy. “Launch AI pilots” is the current strategy. The specifics of which workloads, which capabilities, and which problems get figured out later, if at all.

And when everyone agrees, no one feels responsible for questioning the premise. If the initiative fails, the failure feels external. “The technology didn’t deliver.” “The market shifted.” The unanimous decision provides cover. A failed bet that everyone else also made is easier to defend than a failed bet you made alone.

None of this means consensus is always wrong. Some companies moved to the cloud early and gained real advantages. Some CDOs drove genuine transformation. Some AI pilots are producing measurable results right now. The technology trends underlying the consensus are often real.

The problem is that consensus makes it harder to figure out whether you should act, and how, and when. It substitutes external pressure for internal analysis.

When you find yourself in a room where everyone agrees on a technology direction, these questions help:

  • Is the conclusion specific to our situation, or generic to the market? “We should use AI to reduce our customer service call handling time by 30%” is specific. “We need an AI strategy” is generic. Generic conclusions deserve extra scrutiny. They’re usually shaped by external pressure.
  • What’s the evidence we’ve gathered versus the evidence we’ve borrowed? Have you actually tested the assumption in your environment? Or are you relying on vendor case studies, analyst reports, and competitor announcements? Borrowed evidence is what makes consensus feel so convincing, and so often wrong for specific organizations.
  • What happens if we wait six months? The urgency around technology consensus is almost always overstated. Cloud computing didn’t disappear while companies deliberated. Digital capabilities didn’t evaporate while organizations figured out what they actually needed. The penalty for waiting is typically smaller than the penalty for rushing.
  • Who in this room has a reason to disagree? If no one does, that’s a problem. Either you’ve accidentally assembled a homogeneous group, or the dissenters have stopped dissenting because the social cost feels too high. Neither is good for decision quality.

I don’t have clean data on which companies “won” these technology transitions by being more deliberate. That research doesn’t really exist; we have failure rates, not success profiles. What I have is pattern recognition from watching these cycles repeat.

The companies that seem to come out ahead ask harder questions before committing. They define what they specifically need, not what the market generally recommends. They test assumptions in their own environment before scaling and build in time to learn.

That’s not the same as moving slowly. You can move quickly once you know what you’re building and why. The problem is moving too quickly toward a generic conclusion drawn from the consensus.

AI capabilities have genuinely advanced. Large language models can do things that weren’t possible five years ago. Computer vision has improved dramatically. Some organizations will gain a significant advantage from these technologies. That’s all true.

What’s also true: 88% of AI POCs don’t reach production. And the current frenzy has the same feel as the cloud migration rush and the CDO hiring spree: lots of motion, pressure to keep up, fear of falling behind, and not enough people asking “what do we specifically need?”

When you hear “everyone knows,” slow down. Ask what specific evidence applies to your situation. Ask who benefits from the urgency. Ask what you’d need to see to change your mind.

The phrase “everyone knows” is a signal. Pay attention to it.

This is the kind of strategic question I work through with CEOs navigating technology decisions. If it’s on your radar, I’d be happy to talk it through: ericbrown.com

More on AI reality checks and technology strategy every week in my newsletter: newsletter.ericbrown.com

Like what you're reading?

Get new issues delivered to your inbox. One idea per issue, no spam.