AI isn't the tool (it's just an ingredient)
Why Chief AI Officers fail in 30 months (and what Capital One did instead)
Between March and December 2024, companies appointed Chief AI Officers at record pace. A 70% increase year-over-year, according to Altrata’s analysis of 35,000 companies. Eli Lilly in October. Boeing in March. PwC US in July. Pfizer in August. The entire U.S. federal government created approximately 100 Chief AI Officer positions in May alone.
The LinkedIn posts are all excitement. Boards are “committed to AI-driven transformation.” Companies are “investing in AI leadership at the highest levels.”
Watch the actual conversations:
“What’s your Chief AI Officer working on?”
“Developing our AI strategy.”
“What’s the strategy for?”
“To guide our AI initiatives.”
“What initiatives?”
“The ones we’ll start once we have the strategy.”
It’s a circle of nothing. A corporate ouroboros eating its own tail. Companies organizing around an ingredient instead of building actual tools.
I know that excitement. I know those conversations.
Ten months earlier, I was writing those exact LinkedIn posts. “AI trends for industrial companies.” “How AI is transforming manufacturing.”
I’d spent months on that campaign. Built audience segments. Written dozens of posts. Had a whole project planned to scale it up.
I was falling for it. Completely.
Then one sixty-minute customer call destroyed everything I thought I knew.
If you only have five minutes, here are the key points:
Companies are hiring Chief AI Officers at record rates—but often without a clear problem to solve.
Many organizations are caught in circular logic: building AI strategies to guide future initiatives that don’t yet exist.
My turning point: a single customer call revealed that AI prompting frameworks fail in domains with deep, tacit knowledge.
Real value comes not from AI hype but from solving specific problems—like retaining institutional knowledge before experts retire.
Historical parallels (e.g., the Chief Data Officer wave) show that organizing around technology instead of tools leads to confusion and churn.
The right question isn’t “How can AI help?”—it’s “What’s our biggest bottleneck, even if AI didn’t exist?”
Start with the problem. Build the system. Then use AI to enhance it—not define it.
I was on a video call with a customer who needed help with prompting. Standard Tuesday afternoon—him in his home office with engineering drawings tacked to the wall behind him, me trying to teach best practices. Context, clarity, structure. The COSTAR framework (simplified).
I’m Head of Product at MAIA—a knowledge management platform—and I still sporadically run customer support sessions myself. Teaching people how to get better results from AI.
This customer wanted specific information about his engineering process. I said what I always said: “You need to give it more context. Like... how would you describe your day-to-day to someone new to your job? Walk me through an average day. Pretend I know nothing.”
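If you haven’t met COSTAR: it’s just a structured prompt template. The letters stand for Context, Objective, Style, Tone, Audience, Response format. Here’s a minimal sketch of the kind of thing I was teaching (my illustration, not our actual training material; the field values are invented):

```python
# Simplified COSTAR-style template (illustrative sketch only, not
# MAIA's real material). Each field forces you to hand the model
# context it can't guess on its own.
COSTAR_TEMPLATE = """\
Context: {context}
Objective: {objective}
Style: {style}
Tone: {tone}
Audience: {audience}
Response format: {response}
"""

prompt = COSTAR_TEMPLATE.format(
    context="I'm a mechanical engineer choosing materials for injection-molded plastic parts.",
    objective="List the questions I must answer before loosening a tolerance.",
    style="Internal engineering memo",
    tone="Precise, no filler",
    audience="A junior engineer new to our team",
    response="A numbered checklist",
)
print(prompt)
```

It works fine when the relevant context fits in a few sentences. Hold that thought.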
He leaned forward, adjusted his reading glasses, and started talking. Slowly at first, then faster as he got into it.
Twenty minutes in, he was explaining why a decision made in 2009 about vendor relationships still affected their material selection.
Forty minutes in, he was walking me through failure modes that only happened when three specific conditions aligned—conditions discovered through six months of testing.
Sixty minutes in, I knew more about engineering complex plastic parts than I ever wanted to. The tolerances. The material properties. The failure modes. The testing protocols. The regulatory requirements. The institutional knowledge about why certain arbitrary-seeming numbers weren’t arbitrary at all.
My cursor was still blinking in the prompt window. We hadn’t even started.
That’s when it hit me—that sick feeling in your stomach when you realize this isn’t going to work.
The context needed here was massive. Years of accumulated knowledge, dozens of edge cases, hundreds of tiny decisions that only made sense if you knew the history. COSTAR and all these prompting frameworks? They don’t scale for domain experts.
Our users don’t need better prompting skills.
They need less “AI” and more “solve my problem.”
I sat there after the call, staring at my other monitor. My LinkedIn campaign dashboard was open. “AI trends for industrial companies”—scheduled posts stretching weeks into the future.
I highlighted everything.
Then I killed it. All of it.
It’s no fun
Except killing it wasn’t the end. It was the beginning of months of hard work and discipline.
That LinkedIn campaign I deleted? It was my most successful content ever. “Love the shit you’re posting here.” I got that comment weekly from our exact ICP (ideal customer profile). Engagement up 340%.
And I was about to destroy it all because of one sixty-minute video call.
“We need to stop saying AI,” I told our sales team the next Monday.
Silence.
“But that’s what they’re asking for.”
“I know.”
“That’s what’s working.”
“I know.”
A sales rep looked at me like I’d suggested we stop accepting money.
It took three months to convince marketing. Two months to scrub “AI-powered” from our public roadmap and replace it with “enterprise knowledge management.”
My LinkedIn views dropped 40% in month one.
But then we started developing the alternative. Webinars on product management in industrial companies. Content about knowledge retention. Specific, unsexy, real problems.
The fog cleared.
Focusing on the actual problem forced clarity. It took weeks, but we finally nailed down our exact product principles. Not “democratizing intelligence” but concrete decisions: We capture institutional knowledge. We make it retrievable when people leave. We integrate with existing workflows. (If you’re curious, just ask; I believe in transparency.)
The product roadmap became obvious. Sales conversations got shorter but deeper. The customers who reach out now actually need what we built.
The weird part? While I was deleting my AI campaigns, everyone else was doubling down on theirs.
The appointments kept rolling in. Harvard Business Review published thought leadership about “AI transformation.” McKinsey released frameworks for “AI maturity models.”
Everyone copying the same playbook. Everyone organizing around the technology.
I’d find myself in these conversations at conferences, on sales calls, in Slack communities. They all sounded the same. The same circular logic, the same empty strategies, the same confusion dressed up as innovation.
“We’re building our AI capabilities.”
“What capabilities specifically?”
“You know, AI-driven insights.”
“Insights about what?”
“That’s what we’re exploring.”
We’ve seen this before
In 1990, Signet Bank (whose credit card division later became Capital One) made a massive investment. They turned their credit department into a laboratory. Richard Fairbank and Nigel Morris tested different credit terms on different customer profiles. For years. The department “lost money.”
But they weren’t losing money. They were building a decision system.
By the time they hired their first Chief Data Officer in 2002—Cathryne Clay Doss—they already knew what problem they were solving: optimize credit decisions. The CDO managed a system that already worked.
Capital One organized around the tool (credit decision optimization). Data and the CDO role supported that tool.
Then everyone else copied the CDO role.
Yahoo hired Usama Fayyad in 2004 as Chief Data Officer. The press release glowed: “responsible for Yahoo!’s overall data strategy, architecting Yahoo!’s data policies and systems, prioritizing data investments.”
Notice the difference? Capital One’s CDO managed a working decision system. Yahoo’s CDO was supposed to... figure out what to do with data.
By 2012, 12% of large organizations had CDOs. By 2018, 68%.
Everyone copied the role. Almost no one copied the system.
Here’s what happened: Average CDO tenure was 30 months. Not because they were fired. Most left “seeking opportunities to create more impact” (MIT Sloan). Corporate-speak for: “I couldn’t figure out what I was supposed to do.”
Harvard Business Review in 2021: “The role is relatively new, so companies are still trying to decide what they want from the person in this position.”
They hired someone, gave them data (the ingredient), and expected value. No clear decision system. No defined problem.
The timeline for companies that copied wrong:
Year 1: Data governance frameworks
Year 2: Some pilots
Year 3: “Where’s the ROI?”
Month 30: CDO leaves or role eliminated
Reading about Capital One, I felt that familiar nausea. I was about to become Yahoo circa 2004. Marketing the ingredient (AI), hoping customers would figure out the tool (knowledge management).
Same mistake. Just twenty years later.
The real problem
Every discovery call starts the same way now.
I changed the question. No more “How can AI help your business?”
Now I ask: “What’s your biggest challenge?”
The answers are always immediate:
“You know what the Damocles sword is for us? People retiring.”
That exact phrase. “Damocles sword.”
Every industrial company says this now. Different words, same terror. Retirements are the nightmare scenario. When senior people leave, decades of institutional knowledge evaporate.
Six weeks ago, a manufacturing VP walked me through their version.
A senior mechanical engineer had just retired after twenty-three years (the average at this company!). He was the only person who understood why certain tolerances in their flagship product were set to what seemed like arbitrary numbers.
But of course, those numbers weren’t arbitrary. They were the result of six months of failure analysis in 2009 after a batch of defective parts nearly cost them their biggest client. The magic number that prevented catastrophic failure under specific stress conditions.
Nobody documented it. The decisions lived in the engineer’s head. Now he’s gone.
The new engineer wanted to “optimize” those tolerances. Made perfect sense on paper—tighter tolerances, better quality. Would have reintroduced the exact failure mode from 2009.
They caught it because the retiring engineer mentioned it in his exit interview. By luck, not by system.
The VP’s voice cracked slightly: “We almost shipped death traps because we forgot why we do what we do.”
This is the pattern everywhere: institutional knowledge walking out the door with no capture system.
Creating an “AI department” doesn’t solve this.
AI can help you retrieve knowledge. But only after you’ve built the system that captures it.
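To make “capture first, retrieve second” concrete, here’s a deliberately naive sketch (my illustration, not how MAIA actually works). The retrieval step can be keyword matching or an LLM over embeddings; either way, it can only surface what step one captured:

```python
from dataclasses import dataclass, field

# Toy sketch of the capture-first idea. The record of *why* a decision
# was made is the asset; retrieval only works if this record exists.

@dataclass
class DecisionRecord:
    subject: str        # e.g. "housing wall tolerance"
    decision: str       # what was decided
    rationale: str      # the part that otherwise walks out the door
    year: int
    author: str
    tags: list[str] = field(default_factory=list)

records: list[DecisionRecord] = []

def capture(record: DecisionRecord) -> None:
    """Step 1: capture while the expert is still in the building."""
    records.append(record)

def retrieve(query: str) -> list[DecisionRecord]:
    """Step 2: dumb keyword search. Swap in embeddings or an LLM later;
    they can only surface what step 1 captured."""
    q = query.lower()
    return [r for r in records
            if q in r.subject.lower()
            or q in r.rationale.lower()
            or any(q in t.lower() for t in r.tags)]

# Invented example data, echoing the story above.
capture(DecisionRecord(
    subject="flagship housing tolerance",
    decision="Keep the 2009 spec; do not tighten.",
    rationale="Six months of failure analysis after the 2009 defect batch: "
              "tighter tolerances reintroduce the stress-condition failure mode.",
    year=2009,
    author="senior mechanical engineer",
    tags=["tolerances", "failure mode"],
))

print(retrieve("tolerance")[0].rationale)
```

Swap `retrieve` for semantic search and you’ve improved step two. Skip step one and there’s nothing to improve.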
A better question
After I killed my AI campaigns, I changed how I talk to customers.
Here’s how I approach any business now.
Old question: “How can AI help your business?”
New question: “If AI didn’t exist, what’s your biggest bottleneck right now?”
Not “your AI bottleneck.” Your actual bottleneck.
The relief in their voices when they can talk about real problems instead of technology dreams—it’s palpable.
Growth? Why is growth blocked? Need to reduce costs or increase revenues? What specifically stops you? Go three levels deeper. Is it sales cycle time? Customer churn? Product development speed?
Operations? What breaks first when demand spikes? Where do you lose efficiency? What manual process eats the most time?
Innovation? What stops your team from trying new approaches? Is it that nobody remembers why you tried something similar five years ago and it failed?
For our customers, the answer is always knowledge management. Specifically: institutional knowledge that exists only in people’s heads and evaporates when they leave.
That’s our bottleneck. That’s what we built MAIA to solve.
Your bottleneck is probably different.
Figure that out first. Get specific. Go deep.
Then yes, AI can help you build a better tool to address it.
But the system you build should solve your bottleneck. AI should support that system.
Not the other way around.
Stop organizing around AI
Your senior engineer leaves next month. She’s the only person who understands why your payment system is architected the way it is. The undocumented decisions, the edge cases only she remembers, the reasons for seemingly arbitrary choices.
Capture that knowledge first. Make it searchable. Build the system.
Then AI can help you retrieve it.
Capital One understood this in 1990. They organized around credit decisions. The data and technology supported the decision system.
Everyone else organized around the technology and wondered why it didn’t work.
I almost made the same mistake. Three months from doubling down on campaigns about “AI trends.” Three months from becoming another Yahoo-style failure. Three months from contributing to the noise instead of solving real problems.
The relief of catching yourself before the cliff—it’s physical. Like dodging a car accident and feeling your whole body vibrate with unused adrenaline.
AI is an ingredient. You can’t bake bread by organizing your kitchen around flour.
You can’t retrieve what you never captured.
And you can’t solve problems by hiring someone to figure out what problems AI might solve.
The companies appointing Chief AI Officers right now? They’re about to learn what the CDO wave taught us: organizing around ingredients instead of tools is a recipe for thirty months of expensive confusion.