
I recently asked Claude to help me think through the structure of my next book. I already knew where the weak point was – I just wanted to see what Claude would come up with. I was amazed by its response: Claude didn’t just correctly identify the problem. It suggested how to use an idea from a paper I had published more than a decade ago to fix it. In other words, Claude had discovered something in my own writing that I hadn’t even thought of.
But Claude also took 10 minutes to respond. Ten minutes of GPU clusters drawing enough power for a small apartment, constructing chains of reasoning no human will ever see. All to revise an outline. The result was worth every penny of my $100 monthly subscription. I do this dozens of times a day. I can’t be sure how much money Anthropic is losing on me – but it’s a lot.
Anthropic recently raised $30 billion at a $380 billion valuation. OpenAI, which raised $40 billion in its last fundraising round, is seeking another $100 billion. With users like me, they’re going to need it. Investors see the most important companies in the world. I see the most vulnerable.
My longtime friend and mentor Clayton Christensen spent his career explaining why. Successful companies improve their products faster than most customers require. Their cost structures are built to serve their most demanding users. And then they get eaten from below by cheaper alternatives that serve everyone else. It happened in everything from disk drives to excavators. The best companies didn’t fail because they made mistakes; they failed because they did everything right.
The frontier AI companies are building exactly the kind of cost structure Christensen warned about, at a scale he could never have imagined. No company in the history of capitalism has ever paid its employees like this. Meta has offered AI researchers packages worth $200 million. OpenAI spent $6 billion on stock-based compensation in 2025 alone – nearly half its revenue. For most tech companies preparing to go public, that number is closer to 6%.
The salaries are just the beginning. OpenAI alone has pledged to spend $1.4 trillion on data centers over the coming years, much of it on GPU clusters that depreciate faster than almost any other capital asset. The result: OpenAI projects $14 billion in losses for 2026 and $115 billion cumulative through 2029. This can’t be unwound. The salaries can’t be cut without losing indispensable researchers. The GPUs lose value whether or not they generate returns.
Here’s the trap: AI companies charge by the token, the basic unit of text their models process. According to one study, the cost per token for a given performance level is falling fast, roughly five to tenfold per year. That sounds like salvation. But frontier models don’t just get smarter. They get hungrier.
Reasoning models, the main current approach for improving LLM performance, work by generating enormous chains of computation before producing an answer. As performance improves, those chains consume exponentially more tokens. OpenAI, for example, reported that its customers’ token consumption grew 320-fold in 2025. The same study found that rising demand for reasoning tokens swamps the cost savings. Each unit of intelligence gets cheaper – but overall, intelligence has never cost more.
I can see it in my own usage. The tasks I give Claude today would have been unthinkable a year ago. Each one is cheaper per token. I use 10 times as many tokens.
You can also see this playing out across the industry. The software company Notion reported that AI costs ate 10% of its profit margins, and Notion isn’t even running frontier models. It offers summarization and search. Microsoft’s GitHub Copilot was reportedly losing more than $20 per user per month while charging $10. If basic AI features bleed margins and popular AI tools lose money on every customer, the companies expanding the frontier have it worse.
But for someone who’s only selling last year’s model, the economics can be spectacular. Yesterday’s frontier is today’s commodity, and commodity providers don’t carry the cost of pushing the boundary forward. They can offer the same capability at a fraction of the price. The frontier companies can’t match those prices without accelerating their own losses. And they don’t even want to. Their entire strategy is to keep pushing capability higher, which means focusing on the most demanding users and letting the rest go.
This isn’t a mistake. But it is a trap.
You can see it happening in real time. OpenAI recently retired its GPT-4o model. OpenAI had tried to kill 4o once before, last August, and was forced to reverse course by user revolt. Users had preferred the older model’s warmth and conversational style over GPT-5’s superior capability. When it died, they created communities to mourn it. They threatened to cancel subscriptions. They created memes of hamsters burning down OpenAI (I admit I don’t get that one). Only 0.1% of ChatGPT’s 800 million users were still choosing 4o daily. That’s 800,000 people. It’s a rounding error to OpenAI, and a midsize city’s worth of customers telling the company they didn’t want more intelligence. They wanted a chatbot that felt like a friend. The frontier had overshot them and they knew it, even if OpenAI didn’t.
The broader data confirms what the 4o revolt illustrates. A RAND report found that Chinese AI models operate at one-sixth to one-quarter the cost of comparable US systems. The performance gap between open-source and proprietary models collapsed from 8% to 1.7% on a standard benchmark in a single year. DeepSeek prices its API at roughly 90% below OpenAI’s comparable offerings.
These cheaper alternatives take the customers that frontier companies could never serve profitably. A disruptive entrant without $200 million researcher salaries can serve the same user at a fraction of the price and make money doing it. The frontier companies are left with the most demanding users and the highest costs. In Christensen’s disk-drive cases, the lag between frontier and commodity was measured in years. In AI, it’s measured in months.
The frontier companies’ best defense is inertia. Companies that have built workflows around a frontier provider’s API face real friction in switching. That friction erodes every quarter as open-source alternatives make switching easier. Lock-in delayed disruption in Christensen’s cases, too – but it never prevented it.
The obvious objection: Intelligence isn’t a disk drive. There may be no ceiling on useful capability, and the most demanding users may never be overshot. But Christensen’s theory doesn’t require a universal ceiling. It requires that the improvement trajectory outpaces what some customers need – and that those customers represent most of the market. The 800,000 users who clung to GPT-4o weren’t saying AI had stopped improving. They were saying the next increment wasn’t worth it to them.
Christensen’s theory has limits. Jill Lepore has argued that some of his supposedly disrupted companies adapted and survived. No social science theory works under all circumstances. But the cost structures here are extreme, the lag between frontier and commodity is measured in months, and viable low-cost alternatives already exist at scale. If the theory applies anywhere, it’s here.
Of course, none of this applies if one of these companies actually builds a cybernetic God. Christensen’s theory describes what happens when innovation is continuous and incremental. AGI would be something else entirely, a discontinuous leap that operates by different economic rules. That bet deserves to be taken seriously, which is exactly why its economics matter. If AGI doesn’t arrive, the disruption framework applies in full. If it does, conventional economics may be meaningless, or the value may be split among firms that each spent hundreds of billions getting there.
The companies spending the most to advance AI may be the least likely to profit from it. Christensen would have recognized the pattern. I’m the demanding customer these companies are spending billions to keep. The users they keep ignoring will be the profitable ones – just not for the companies that made them possible. By creating technologies that others will end up profiting from, the frontier labs and their investors may well be performing the largest act of involuntary philanthropy in history.
– – –
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Gautam Mukunda writes about corporate management and innovation. He teaches leadership at the Yale School of Management and is the author of “Indispensable: When Leaders Really Matter.”