"This isn't what I signed up for."
One of our ML engineers was standing in my office, resignation letter in hand. They'd joined nine months earlier, brilliant and eager to build cutting-edge machine learning systems. Instead, they'd spent those months writing SQL queries and building data pipelines—mundane infrastructure work that any junior developer could handle.
They were right to quit. I'd promised them AI innovation and delivered data janitorial work. But their departure revealed something darker than just my failure as a manager: the entire AI adoption playbook that everyone's selling is complete bullshit, and we're all pretending otherwise.
Here's what nobody wants to admit: No company actually knows how to adopt AI successfully. Not your consultants with their "AI transformation roadmaps." Not the vendors pushing "enterprise AI platforms." Not the executives demanding "AI strategy" for their investor decks. We're all running the same doomed playbook, hoping this time will be different.
I know because I ran that playbook perfectly. And it destroyed my team.
If you only have 5 minutes, here are the key points:
“AI adoption” fails because it demands company-wide behavioral change, not just new tech.
Most ML hires end up doing plumbing, not AI, and burn out or quit.
POCs die when real business workflows refuse to bend to black-box predictions.
Dashboards, embedded analysts, and transparent data win more trust—and generate more impact—than any fancy model.
Unless an org is built around algorithmic decision-making from day one, the transformation narrative is theater.
The theater of AI readiness
I'd been evangelizing AI across our 700-person company for months. The recipe seemed clear—every conference keynote, every vendor pitch, every thought leader blog post laid out the same steps:
Build an ML team. Check—I hired PhDs and talented engineers who could debate neural network architectures in their sleep.
Get executive buy-in. Check—I had multiple VPs excited about predictive analytics, automated insights, risk modeling. The head of operations was practically salivating over optimization possibilities.
Identify use cases. Check—one team had already generated millions with their ML solution. I'd identified three more opportunities just as large.
Build proof of concepts. Check—our prototypes showed impressive metrics and clear business value. The algorithms revealed patterns that could transform how we operated.
Present the ROI. Check—my presentations had all the projections, competitive analyses, and implementation roadmaps any executive could want.
I did everything right according to the "AI Adoption Best Practices" that everyone preaches. And that's exactly why it failed.
The question that kills everything
"Okay, but are you sure you can get the operations team to integrate that into their workflow system? And will the sales team actually act on these predictions?"
That question came up in every presentation, and it revealed the massive lie at the center of AI adoption: it's not about adopting AI. It's about reorganizing your entire company around algorithmic decision-making. And nobody—not the vendors, not the consultants, not the thought leaders—wants to admit that because it would kill the entire industry.
Think about what AI actually demands:
Sales teams trusting algorithms over relationships they've built for years
Operations teams redesigning processes around black-box recommendations
Finance teams accepting predictions they can't audit
Every team changing how they work to accommodate machines
That's not adoption. That's enterprise-level transformation. It's asking humans to subordinate their judgment to statistical models. And every company saying they're "ready for AI" is lying—they just don't know it yet.
"Sven, no," the CEO finally said, cutting off my third attempt to push our AI strategy. "I want you to work on something else."
He was right to shut it down, but for the wrong reasons. He thought we weren't ready. The truth is, nobody is ready because the entire framework is fantasy.
The human cost of pretending
Dissolving the ML team meant admitting the whole thing was theater. These weren't just employees—they were people who'd bet their careers on our AI vision. They'd turned down other offers, relocated, invested their time in problems that would never see production.
One data scientist transitioned to traditional analytics—a demotion for someone with her qualifications. Two engineers immediately started job hunting. But that one engineer who quit—their departure hit different.
Nine months earlier, they'd joined full of ideas about novel approaches, reading papers on weekends, constantly proposing new architectures. By month three, they were building ETL pipelines. By month six, they were writing documentation for data marts. By month nine, they were gone.
"This isn't what I signed up for."
They thought they were calling out my failure. They were actually calling out an entire industry's lie.
What actually worked
After the ML team imploded, I did something radical: I stopped pretending AI transformation was possible and started building what companies actually need.
I doubled the size of the data team and went all in on data infrastructure. But not the kind vendors want to sell you. No "AI-first architecture." No "ML ops pipeline." No "center of excellence" bullshit.
Instead:
Embedded analysts in product teams, not centralized AI wizards nobody trusts.
Self-service tools for humans, not black-box models that make decisions for them.
Solving today's problems, not tomorrow's transformation.
Clear, accessible dashboards that people actually used, not complex predictions they couldn't understand.
We gave people data they could understand and act on. Not predictions they had to trust blindly. Not algorithms making decisions for them. Just clear, accessible information that helped them do their jobs better.
The result? Actual adoption. Actual impact. Sales teams used our tools daily. Operations improved their processes. Marketing made better decisions. Not because we transformed them with AI, but because we gave them what they actually needed: trustworthy data they could understand.
But here's why nobody wants to sell this approach: it's not sexy. You can't charge millions for "we'll help you build good dashboards and embed analysts in your teams." You can't raise venture capital on "we make data accessible." You can't keynote conferences with "AI is mostly unnecessary for 99% of companies."
So instead, the entire industry keeps selling the transformation fantasy.
The lies we tell ourselves
Every AI vendor knows their enterprise customers aren't ready for actual AI adoption. They know those million-dollar platforms will become expensive experiments. They know the POCs will never reach production. But they sell it anyway because that's what enterprises want to buy—the dream of transformation without the reality of change.
Every consultant knows their "AI maturity model" is fiction. They know their roadmaps lead nowhere. They know the organizational change required is impossible for most companies. But they sell it anyway because that's what executives want to hear—that they're just a few steps away from AI-driven competitive advantage.
Every executive knows they're not actually going to reorganize their company around algorithms. They know their teams won't trust black-box recommendations. They know they're buying theater. But they buy it anyway because they need "AI strategy" in their investor presentations.
And people like me? We knew deep down that our ML engineers would end up doing data infrastructure work. We knew those cutting-edge models would never see production. We knew we were selling false promises. But we did it anyway because we wanted to believe we were different.
The truth about AI adoption
You can't force a company into adopting AI because AI adoption, as it's currently conceived, is impossible for 99% of organizations. It requires a level of transformation that would fundamentally break most companies.
The successful "AI companies" everyone points to—Google, Meta, Netflix—didn't adopt AI. They were built around algorithms from day one. Their entire organizational DNA assumes algorithmic decision-making. That's not transformation; that's architecture.
For everyone else, the choice isn't "adopt AI or get left behind." It's "pretend to adopt AI or admit the emperor has no clothes."
The next time someone pitches you on AI transformation, ask them this: "Who will lose their job when the algorithm makes a bad decision?"
Watch them scramble for an answer that doesn't exist. Because accountability can't be transformed, trust can't be automated, and judgment can't be algorithmic.
That's not a technology problem. That's why nobody actually knows how to adopt AI.
We're all just pretending otherwise.