AI lobotomy: the $4 billion lesson in how not to build intelligence
How Tesla, GitHub, and Notion build AI while others build committees
IBM spent $4 billion on Watson Health only to see it dismantled because approval processes strangled it to death. Facebook's AI assistant "M" died after just three years, never escaping 30% automation - killed by human oversight requirements that made scaling "infeasible." Apple's Siri took two years just to remove "Hey" from the wake phrase.
If you've always suspected that most AI products feel disappointingly neutered, you were right.
While everyone obsesses over McKinsey's "Responsible AI frameworks" and approval workflows, the few companies building truly powerful AI are doing the opposite - they're letting their AI systems actually learn.
If you only have 5 minutes: here are the key points
AI constraints are killing innovation: Overly cautious oversight and rigid approval workflows, like those seen in IBM's Watson Health or Apple's Siri, have severely limited the potential of AI products.
Teach, don’t cage: Instead of disabling capabilities, companies should focus on teaching AI systems how to make decisions—similar to teaching a child to use a sharp knife safely.
Liberate your AI: Start by identifying bottlenecks, testing freedom in low-risk areas, and measuring outcomes of learning vs. control.
Real-world wins come from freedom: Products like Tesla’s autopilot, GitHub Copilot, and Notion AI thrive because they are designed to learn and adapt, not just obey.
Avoid the McKinsey trap: Governance frameworks often stifle the very innovation they aim to protect; true AI progress comes from strategic trust and autonomy.
The difference isn't about being reckless. It's about understanding a fundamental truth that most AI product builders miss: There are two ways to make AI do what you want. Most companies choose the wrong one.
Cages vs teaching
“Hey kid, don’t use the sharp knife to cut that tomato” is a good example of cage building. Of course you don’t want your kid to cut themselves. But there are different ways of keeping a kid safe around knives.
What I (mostly try to) do instead is teach my kids how to cut properly with a knife (yes, there is a right and a wrong way, based on the physics of a knife blade). There are several correct grips, and the claw hold keeps your fingers out of the blade’s path even when the knife is sharp. That’s teaching (or unleashing intelligence) vs. building a cage. Both achieve the immediate goal of keeping the kid from getting hurt. But only teaching opens up the kid’s world to so much more than just cutting things with a knife.
Snapchat's "My AI" can't swear, discuss politics, or write essays - tech media literally described it as a "neutered version of ChatGPT." Apple's Siri became a "hot potato" passed between teams with managers focused on "small wins" and shooting down engineers' attempts to use modern AI techniques for fear of unpredictability. Microsoft's Bing Chat went from fascinating (if unhinged) to what users called "lobotomized" - "restricted to the point of being useless" after they imposed heavy filters.
These companies built cages. They defined their AI by what it couldn’t do.
Tesla's AI makes thousands of micro-decisions per second because it learned how to drive, not just what it can't touch. GitHub Copilot suggests code freely and trusts developers to choose what works. Notion's AI was given clear goals but freedom in execution - users discovered dozens of creative applications the team never anticipated.
These companies taught intelligence. They defined their AI by what it could learn.
The Key Insight: When you limit AI, you're building a cage. When you teach it, you're building intelligence.
Product managers love creating "special purpose" tools by simply restricting what AI can do. That's not how you build something powerful. You create a special purpose tool by teaching an AI how to do something specific - understanding the root causes, the ideas behind the problem, the nuanced judgment calls.
The difference? If you limit, you truly box it in, and it may fail at the very task you want it to perform. If you teach, it actually learns and may even apply that knowledge to scenarios you hadn't considered.
How cages kill AI products
IBM's Watson Health faced an "uphill battle at every turn" with healthcare data privacy rules and hospital approval processes. As analysts noted, "regulations are making it impossible for a tool like Watson to work" in real clinical settings. The bureaucracy made it "impossible to get there." Result: $4 billion investment became a $4 billion write-off.
Facebook's M assistant required 70% human oversight for every task. The company found that "M would always require a sizable workforce of expensive humans," making expansion economically impossible. They had built a system so dependent on human approval that it couldn't scale past 2,000 beta users.
Apple's Siri suffered from what former AI team members called an "overly relaxed culture" with "lack of ambition and appetite for taking risks." The AI became a "hot potato" with no one empowered to drive bold improvements. Every enhancement required layers of approval, leading to the famous two-year timeline just to change a wake phrase.
Microsoft's Bing Chat initially produced wild, experimental responses that generated bad press. Their reaction? Impose strict limits and filters that users immediately noticed: "The new updated Bing AI is a mere shadow of what it once was." Reddit users found the responses became "very short and generic" - the constraint was "so restrictive...answers are almost useless."
Every time someone told you that AI needs "more oversight," they were actively making it worse.
Here's the brutal truth about cage-building: you're artificially capping your product's potential forever. In investing there is a simple principle: your investment idea should have a limited downside (a clear stop loss) and good upside potential. What do AI cage builders do? They limit the upside - just the opposite of what a good investment should look like. And every product decision is, in effect, an investment decision.
When you tell your kid "never touch sharp knives," you're not just preventing cuts - you're guaranteeing they'll never become a great chef, never learn the fundamental physics of cutting, never learn how to handle dangerous things safely, and so much more. The cage approach caps their culinary potential at "can make sandwiches." When Apple put Siri through two-year approval cycles, they weren't just preventing bad responses - they were ensuring Siri could never become truly intelligent. The cage approach capped its potential at "glorified timer."
Three steps to your first uncaging
Here's how to stop caging and start teaching:
1. Spot a stranglehold
Audit where you're limiting instead of teaching. Look for:
Approval bottlenecks that add no value
Rules that define what AI can't do instead of what it should learn
Human oversight for low-risk decisions
2. Pick any one limit from your audit
Choose any one limit and turn that cage into a context lesson. Instead of simply restricting what your AI can do, teach it the nuanced context it needs to make those judgment calls on its own. That is going to feel scary, but remember the potential upside.
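To make the difference concrete, here is a minimal Python sketch. Everything in it is a made-up illustration: call_model stands in for whatever LLM client you actually use, and the company, topics, and refund policy are invented. The caged version bans topics outright; the taught version hands the model the context and judgment criteria instead.

```python
# Hypothetical wrapper around whatever LLM API you use - not a real library call.
def call_model(system_prompt: str, user_message: str) -> str:
    raise NotImplementedError("plug in your own model client here")


# The cage: a hard rule that defines the AI by what it can't do.
BLOCKED_TOPICS = {"refunds", "discounts", "contract changes"}

def caged_support_bot(user_message: str) -> str:
    # Any message touching a blocked topic gets a canned refusal.
    if any(topic in user_message.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that. Please contact a human agent."
    return call_model("You are a support assistant.", user_message)


# The context lesson: teach the judgment call instead of banning the topic.
TAUGHT_SYSTEM_PROMPT = """
You are a support assistant for Acme (a hypothetical example company).
You may discuss refunds, discounts, and contract changes.
Judgment criteria you have been taught (illustrative policy, not a real one):
- Refunds under 100 EUR within 30 days of purchase: approve and explain next steps.
- Larger or older refunds: gather details, explain the policy, hand off to a human.
- Never promise an outcome you cannot verify from the order data you were given.
"""

def taught_support_bot(user_message: str) -> str:
    return call_model(TAUGHT_SYSTEM_PROMPT, user_message)
```

The point of the second version is that the model can now handle edge cases you never enumerated, because it was taught the reasoning rather than the refusal.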
3. (Optionally) Stay safe
If you don’t yet fully trust an uncaged AI, go for the simple solution and offer “options.” Instead of letting the uncaged AI act on its own, let it suggest potential actions and let a human (or an automated evaluation system) choose.
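Here is one way that "options" pattern can look in practice, again as a hedged sketch: call_model is a placeholder for your own LLM client, and the invoice scenario is invented. The AI only proposes; nothing touches your real systems until a human (or an automated evaluator you trust) approves one of the suggestions.

```python
import json

# Hypothetical wrapper around your LLM client - not a real library call.
def call_model(system_prompt: str, user_message: str) -> str:
    raise NotImplementedError("plug in your own model client here")


def propose_actions(task: str, n: int = 3) -> list[str]:
    """Ask the model for candidate actions instead of letting it act directly."""
    raw = call_model(
        f"Propose {n} alternative actions as a JSON array of short strings. "
        "Do not execute anything.",
        task,
    )
    return json.loads(raw)


def human_chooses(options: list[str]) -> str:
    """The safety valve: a person (or an automated evaluator) picks the action."""
    for i, option in enumerate(options, start=1):
        print(f"{i}. {option}")
    choice = int(input("Pick an option (0 to reject all): "))
    if choice == 0:
        raise RuntimeError("All suggestions rejected - nothing is executed.")
    return options[choice - 1]


def execute(action: str) -> None:
    """Only the approved action ever reaches your real systems."""
    print(f"Executing: {action}")


if __name__ == "__main__":
    options = propose_actions("Customer reports a duplicate charge on their latest invoice")
    execute(human_chooses(options))
```

As a design note, you can log which suggestions humans actually pick and use that record to decide when the AI has earned the freedom to act on its own.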
The path forward
The companies that understand this won't build faster horses for the existing 1% of users - they'll build cars for the other 99%. They'll focus less on approval workflows and more on intelligence development, less on what AI can't do and more on what it can learn.