Thursday, February 26, 2026
Adaptive Perspectives, 7-day Insights

Is AI Artificially Cheap?

AI companies are collectively losing tens of billions of dollars per year while charging consumers and businesses prices that don't cover costs. The Uber playbook suggests those prices won't last, but the history of computing suggests something more nuanced.

Note: This post was written by Claude Opus 4.5. The following is an analysis of publicly available financial data, industry reporting, and historical technology pricing patterns.

In October 2024, Sam Altman told an audience at Harvard that the idea of putting ads in AI was “uniquely unsettling” to him. “I will disclose, just as a personal bias, that I hate ads,” he said.

Fifteen months later, in January 2026, Altman announced that ChatGPT would begin showing ads to free users. “It is clear to us that a lot of people want to use a lot of AI and don’t want to pay,” he wrote on X.

Something changed between those two moments. What changed was math.

OpenAI projects a $14 billion loss in 2026. Anthropic spent more than 100% of its revenue on AWS compute alone through September 2025. The five largest hyperscalers plan to spend over $600 billion on AI infrastructure in 2026. Google’s own CEO has acknowledged “elements of irrationality” in the industry’s AI spending.

The question isn’t whether AI is artificially cheap right now. The evidence is overwhelming that it is. The question is what happens next, and whether history offers any useful guide.

The Uber Playbook

The closest analogy to what’s happening in AI is the ride-hailing industry, and the story of Uber is worth understanding in detail.

Uber launched in San Francisco in 2010 as a premium black car service, priced at roughly 1.5 times the cost of a taxi. In 2012, it introduced UberX, pivoting from luxury to mass market. Then, starting in 2013, the company began aggressively cutting fares to gain market share. In Dallas-Fort Worth, per-mile rates fell from $1.90 in November 2013 to $0.85 by August 2015, a 55% reduction in under two years. In Detroit, rates were slashed from $0.70 per mile to $0.30.

How could they afford this? They couldn’t. In 2015, Uber passengers were paying only 41% of the actual cost of their trips. Venture capital (from Goldman Sachs, BlackRock, Jeff Bezos, the Saudi government, and others) covered the other 59%. Internal Uber presentations described this strategy as “buying revenue.” The subsidy amounted to roughly $2 billion per year in 2015 alone.

Then the music stopped.

As Uber prepared for its May 2019 IPO, prices started rising at roughly 18% per year, nearly four times the rate of inflation. Average fares jumped 30% from early 2018 to the IPO, then another 41% over the next three years. By 2021, Uber rides were 92% more expensive than in 2018, according to Rakuten data. In New York City, the average base fare rose 36% between February 2019 and February 2023.

Simultaneously, driver compensation moved in the opposite direction. Uber’s stated service fee was 25%, but independent research from the National Employment Law Project found the actual average “take rate” had reached approximately 40% by 2023, with individual rides sometimes hitting 65-70%. In 2023, U.S. drivers made 17% less on average than the year before, despite working only 3% fewer hours.

After 14 years and approximately $33 billion in cumulative operating losses, Uber posted its first annual profit in 2023: $1.9 billion on $37.3 billion in revenue. On Valentine’s Day 2024, it announced a $7 billion stock buyback. The stock, which had opened below its $45 IPO price and spent four years underwater, eventually climbed above $100 in late 2025.

The lesson from Uber is clear: the subsidized price was never the real price. The real price arrived once the company needed to be profitable. Riders paid more. Drivers earned less. The service remained useful but became more expensive than the taxis it originally undercut. Today, a median UberX ride in New York costs about $23.50 versus $19.50 for a yellow cab.

The AI Money Furnace

The parallels between Uber’s growth phase and today’s AI industry are striking.

OpenAI reported $3.7 billion in revenue and a $5 billion loss in 2024. By 2025, revenue grew to roughly $13 billion, but losses grew to approximately $8 billion, a burn the Financial Times described as “an era-defining money furnace.” The company projects $14 billion in losses for 2026. Deutsche Bank estimates cumulative negative free cash flow of $143 billion from 2024 through 2029, with profitability not expected until 2029 or 2030.

The losses are structural, not incidental. Altman has admitted that OpenAI loses money on its $200-per-month ChatGPT Pro subscription. “I personally chose the price and thought we would make some money,” he said in January 2025. “People use it much more than we expected.” OpenAI’s inference spending was $3.8 billion in 2024 and hit $8.65 billion in just the first nine months of 2025, potentially exceeding total revenue for the period.

Anthropic had negative 109% gross margins in 2024. Through September 2025, the company spent $2.66 billion on AWS compute against an estimated $2.55 billion in revenue; AWS compute alone exceeded total revenue, before accounting for Google Cloud costs, salaries, or anything else. The company has raised $27.3 billion in funding, including $8 billion from Amazon.

xAI (Elon Musk’s AI venture) has raised $42 billion while generating roughly $500 million in revenue, burning approximately $1 billion per month. It sold Grok to the U.S. government at 42 cents per user.

The hyperscalers are spending at unprecedented scale. Google’s 2025 capital expenditure was $91-93 billion, revised upward three times during the year. Microsoft spent $34.9 billion in a single quarter. Amazon’s first nine months of 2025 saw $92 billion in CapEx, up from $55 billion in the same period of 2024. Meta plans over $110 billion in 2026. The five largest hyperscalers are collectively on track to spend over $600 billion on AI infrastructure in 2026.

One analysis projects the hyperscalers will accumulate roughly $2 trillion in AI assets by 2030, with annual depreciation exceeding their combined profits. The industry is building infrastructure today that it must monetize at scale tomorrow, or face enormous write-downs.

It’s Not Just Uber

The VC subsidy pattern extends well beyond ride-hailing. Across the tech industry, the playbook has been the same: use investor capital to offer below-cost pricing, acquire a dominant user base, then raise prices once alternatives have been weakened or eliminated.

MoviePass offered unlimited movie theater tickets for $9.95 per month, far below cost. It acquired millions of subscribers, then filed for Chapter 7 bankruptcy.

WeWork was valued at $47 billion at peak VC funding from SoftBank, burned through cash at staggering rates, and filed for bankruptcy in November 2023.

Netflix launched standalone streaming at $7.99 per month in 2010. The standard ad-free plan now costs $17.99, more than double. The premium tier has tripled to $24.99. In November 2022, Netflix introduced ads for the first time.

Spotify held its individual premium price at $9.99 per month for a full decade (2012-2022), then raised it four times in four years, reaching $13.99 in January 2026. It posted its first-ever annual profit in 2024, the same year it cut headcount by 20%.

The pattern is consistent: low prices build adoption, investors subsidize the gap, and prices rise once the company needs to demonstrate profitability. Some companies survive the transition (Uber, Netflix, Spotify). Others don’t (MoviePass, WeWork).

What Makes AI Different

If this were the whole story, the conclusion would be simple: AI is artificially cheap, prices will rise, plan accordingly. But AI has something Uber never had: a technology cost curve that works relentlessly in its favor.

The 280x Collapse

The most important statistic in AI right now is from the Stanford HAI 2025 AI Index Report: the inference cost for a system performing at GPT-3.5’s level dropped from $20.00 per million tokens in November 2022 to $0.07 per million tokens by October 2024. That is a 280-fold reduction in under two years.

On harder benchmarks, the pattern holds. The cost of models scoring above 50% on GPQA, a substantially more challenging test, fell from $15 per million tokens in May 2024 to $0.12 by December 2024.

This isn’t a subsidy. This is engineering.
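The annualized rate implied by those two Stanford data points is easy to check. A back-of-the-envelope sketch in Python, using only the figures and dates quoted above (November 2022 to October 2024 is a 23-month window):

```python
# Stanford HAI 2025 AI Index data points quoted above:
# GPT-3.5-level inference cost, in dollars per million tokens.
cost_start = 20.00   # November 2022
cost_end = 0.07      # October 2024
months = 23          # November 2022 -> October 2024

total_drop = cost_start / cost_end         # ~286x overall
annual_drop = total_drop ** (12 / months)  # annualized multiple

print(f"total: {total_drop:.0f}x, annualized: {annual_drop:.1f}x per year")
```

Annualized, the collapse works out to roughly 19x per year, sustained across the whole window.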

Hardware Is Getting Radically Better

AI chip performance is improving at 43% annually, doubling every 1.9 years. Each generation of hardware delivers dramatic gains:

  • NVIDIA’s Blackwell architecture achieved a 25x reduction in energy consumption and cost versus Hopper for inference on large models.
  • NVIDIA’s next-generation Rubin (announced for Q2 2026) promises a further 10x reduction in inference token cost versus Blackwell, with 8x better performance per watt.
  • Google’s seventh-generation TPU (Ironwood) delivers twice the performance per watt of its predecessor and is “arguably on par with Nvidia Blackwell,” according to SemiAnalysis.
  • Amazon’s Trainium3, entering deployment in early 2026, promises double the performance of Trainium2 with 40% better energy efficiency.

ARK Invest has applied Wright’s Law, which says costs decline by a fixed percentage for every cumulative doubling of production, to AI accelerators and found a 37.5% cost decline per cumulative doubling, translating to a 48% compound annual decline in costs since 2014. That learning rate is nearly double that of solar panels and lithium-ion batteries.
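Stated as a formula, Wright’s Law says cost after n cumulative doublings of production is c₀ · (1 − r)ⁿ, where r is the learning rate. A minimal sketch using ARK’s 37.5% figure; the doublings-per-year number is derived here from ARK’s two quoted rates, not something ARK states directly:

```python
import math

LEARNING_RATE = 0.375  # ARK: cost falls 37.5% per cumulative doubling

def wrights_law_cost(initial_cost: float, doublings: float) -> float:
    """Cost after a given number of cumulative production doublings."""
    return initial_cost * (1 - LEARNING_RATE) ** doublings

# How many production doublings per year would reproduce ARK's
# 48% compound annual cost decline?
annual_decline = 0.48
doublings_per_year = math.log(1 - annual_decline) / math.log(1 - LEARNING_RATE)

print(f"{doublings_per_year:.2f} doublings/year")  # ~1.4
print(f"relative cost after 3 years: {wrights_law_cost(1.0, 3 * doublings_per_year):.3f}")
```

In other words, accelerator production doubling roughly 1.4 times per year is enough, at a 37.5% learning rate, to cut costs by about 86% every three years.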

Software Efficiency Is Compounding

It’s not just hardware. Techniques like mixture-of-experts architectures, model distillation, quantization, and test-time compute scaling are making AI dramatically more efficient:

  • DeepSeek’s R1 model achieves comparable performance to OpenAI’s o1 at roughly 40 times lower cost, primarily through engineering innovation: it activates only 37 billion of its 671 billion parameters per token.
  • Model distillation can compress a large model to 1.1% of the original size while retaining 90% of performance.
  • Open-weight models closed the performance gap with closed models from 8% to just 1.7% on some benchmarks in a single year.

The Stanford report estimates that, depending on the task, LLM inference prices have fallen anywhere from 9x to 900x per year.

The Historical Precedent: Computing Has Always Gotten Cheaper

The cost of a unit of computing power has fallen by more than ten orders of magnitude over 60 years. A gigaflop that cost $18.7 million in 1984 costs fractions of a cent today. Storage went from $300,000 per gigabyte in 1980 to under two cents. AWS has reduced prices 134 times since 2006. Broadband internet, adjusted for quality and inflation, is 60% cheaper than it was in 2015.

This is the single strongest argument against the “AI will just get more expensive” thesis. The underlying technology follows relentless cost curves that have held for decades. Sixty years of computing history says the cost of raw capability goes down.

The Jevons Paradox: Why It’s Both

Here’s where it gets complicated. In 1865, economist William Stanley Jevons observed that as coal engines became more efficient, total coal consumption actually increased, because cheaper energy made more applications economically viable.

The same dynamic is playing out in AI and cloud computing more broadly. Global cloud spending hit $90.9 billion in Q1 2025 alone, up 21% year over year, despite falling per-unit costs. Cloud services now account for 33% of enterprise IT budgets. Microsoft reported a 50% reduction in cost per token alongside 37% growth in Azure revenue.

Enterprise AI spending rose 36% between 2024 and 2025, from about $63,000 to $85,000 in average monthly spend, even as per-unit costs plummeted. Organizations planning to invest over $100,000 per month in AI tools more than doubled, from 20% to 45%.

When Satya Nadella saw DeepSeek’s breakthrough in cost-efficient AI, he didn’t say “great, we’ll spend less.” He tweeted: “Jevons paradox strikes again!” He understood that cheaper AI means more AI, not less spending.

The likely trajectory is that the cost per unit of AI capability will continue to fall rapidly while total spending on AI by both businesses and consumers will continue to rise. These aren’t contradictory statements. Cheaper AI makes more use cases viable, which drives more consumption, which means more total spending even at lower unit prices.
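The arithmetic of that last point is worth making explicit: total spend rises whenever usage grows faster than unit cost falls. A toy illustration; the specific numbers are hypothetical, chosen only to show the mechanism:

```python
def total_spend(unit_cost: float, usage: float) -> float:
    """Total spend is just unit cost times units consumed."""
    return unit_cost * usage

# Hypothetical numbers: the unit cost of a query falls 10x, but cheaper
# AI unlocks enough new use cases that usage grows 15x (elastic demand).
before = total_spend(unit_cost=10.0, usage=1_000)   # 10,000
after = total_spend(unit_cost=1.0, usage=15_000)    # 15,000

print(f"spend change: {after / before:.1f}x")  # 1.5x: cheaper units, bigger bill
```

Whenever demand grows faster than unit costs fall, the Jevons outcome follows automatically.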

Who Survives

If AI remains subsidized in the short term but follows computing cost curves in the long term, the competitive landscape depends on who can survive the transition.

The hyperscalers have the clearest path. Google treats AI as an “ecosystem amplifier”: a way to strengthen its $264 billion advertising business rather than a standalone revenue center. Amazon books paper gains on its Anthropic investment while collecting billions in AWS compute fees from the same company. Microsoft integrates AI into Office 365, extracting value through existing enterprise relationships. Meta is spending $60-100 billion annually on AI infrastructure, but primarily to enhance its $200 billion advertising engine.

These companies can sustain AI losses indefinitely because they have other businesses to subsidize them. The risk is that those other businesses eventually strain under the weight. Bank of America projects that AI capex could consume up to 94% of the hyperscalers’ operating cash flows by 2026. Google CEO Sundar Pichai himself has acknowledged that “no company would be immune” if the AI spending bubble bursts.

Apple stands out as the contrarian. While its competitors committed a collective $380 billion in 2025 CapEx, Apple spent $12.71 billion โ€” less than it spent in 2018. It’s sitting on roughly $130 billion in cash. Analysts describe this as deliberate restraint: if AI valuations collapse, Apple has the resources to acquire or partner at favorable terms. Its focus on on-device AI through custom silicon rather than cloud-scale infrastructure is a bet that the data center buildout of 2024-2026 will look overbuilt in hindsight.

OpenAI and Anthropic face the hardest path. Both are losing billions annually with no profitable business to subsidize them. OpenAI’s pivot to advertising, the “last resort” business model its CEO claimed to hate, signals desperation, not strategy. The company is asking advertisers to pay $60 per thousand views, triple what Meta charges. OpenAI’s CFO has outlined plans for “outcome-based pricing” that would give the company a share of revenue from scientific discoveries made using its tools, a move that would fundamentally change the economics of using AI for research.

Anthropic has raised $27.3 billion and is targeting $20-26 billion in revenue for 2026, but its gross margins were negative as recently as 2024. Both companies are expected to file S-1s by the end of 2026, at which point public market investors will impose the same discipline on AI that they eventually imposed on Uber.

Venture capital consensus holds that 40-60% of AI startups face failure or acquisition in 2026. Companies built as “wrappers” around foundation models, adding a user interface but no proprietary data or workflow, have no defensible position as the foundation models themselves add those features. The shakeout is already underway.

What This Means for Businesses

The AI pricing picture creates a genuine strategic dilemma for enterprises.

Current prices almost certainly don’t reflect true costs. When OpenAI admits it loses money on a $200-per-month subscription, the $20-per-month tier and the free tier are even further underwater. When Anthropic spends more than 100% of revenue on compute alone, the API prices it charges are definitionally below cost. Businesses building workflows around these prices should expect them to change.

But “prices will rise” is too simple. The per-unit cost of AI capability fell roughly 280-fold in under two years, and by anywhere from 9x to 900x per year depending on the task. If that trajectory holds even partially, the AI you use in 2028 may cost less per query than what you pay today, even after subsidy removal, simply because the underlying technology is getting that much more efficient.

The real risk is vendor lock-in, not higher prices. As enterprises consolidate from 15-25 AI vendors down to 3-5 strategic platforms, switching costs become the binding constraint. Custom fine-tuned models, proprietary prompts, embedded workflows, and institutional knowledge of a specific platform’s capabilities create dependencies that providers can monetize. Microsoft has already announced that prices for most Microsoft 365 plans will increase in July 2026, framing the change as the end of the “AI experimentation era.”

Contract structure matters more than current price. OpenAI’s standard enterprise terms allow price changes with as little as 14 days’ notice. Businesses should negotiate price locks, avoid overcommitting to token volumes that may go unused (OpenAI typically won’t refund unused API credits), and maintain multi-provider strategies to preserve negotiating leverage.

Hardware costs are a separate concern. We’ve covered the RAM shortage extensively on this site. The memory crisis is structural (data centers will consume 70% of all memory chips in 2026), and it is driving up the cost of PCs, smartphones, and consumer electronics. Businesses planning hardware refreshes should budget for 15-20% higher costs and consider accelerating purchases before prices rise further.

The Honest Answer

Is AI artificially cheap right now? Yes. The evidence is unambiguous. Every major AI company is losing money at scale. The prices consumers and businesses pay do not cover costs. Investor capital is subsidizing the difference.

Will AI get more expensive? Almost certainly in some ways. Subscriptions will rise, free tiers will shrink or become ad-supported (as OpenAI’s already has), and AI companies will find creative ways to capture more value, including outcome-based pricing, premium tiers, and reduced capabilities at lower price points.

Will AI get cheaper? Also yes, and this is what makes it different from Uber. The underlying technology is improving at rates that have few precedents in industrial history. A 280-fold cost reduction in under two years is not a subsidy; it is engineering progress that is likely to continue as hardware improves, architectures become more efficient, and competition drives innovation.

The most likely outcome is a bifurcation: frontier AI capabilities, the newest and most powerful models, will command premium prices. Yesterday’s frontier becomes today’s commodity. The $30-per-million-token GPT-4 of 2023 has already been superseded by models that cost $0.15 per million tokens at comparable quality. The pattern will repeat: today’s expensive capabilities will be cheap in two years, but tomorrow’s capabilities will carry tomorrow’s premium.

For businesses, the practical guidance is:

  1. Build on AI, but don’t build on one AI. Multi-provider strategies and abstraction layers protect against price increases and vendor lock-in.
  2. Expect subscription prices to rise. Budget for 10-20% annual increases in per-seat AI costs, offset partially by falling per-token API costs.
  3. Watch for the Uber pattern in enterprise software. Promotional pricing and aggressive discounting in 2025-2026 may give way to significant price increases once vendors have achieved adoption targets.
  4. Factor in total cost of ownership, not just AI licensing. RAM prices have tripled. Energy costs are rising. The infrastructure that runs AI is getting more expensive even as the AI itself gets cheaper per query.
  5. Don’t lock into long-term contracts at today’s prices without flexibility clauses. The market is moving too fast. What seems like a good deal today may look expensive or constraining in 18 months.

The era of subsidized AI will end, just as the era of subsidized Uber rides ended. But unlike Uber, the technology underneath AI is getting exponentially cheaper. The question isn’t whether you’ll pay more or less; it’s whether the value you get will justify whatever the real price turns out to be.


Sources