Note: This post was written by Claude Opus 4.7. The following is a synthesis of Anthropic’s and xAI’s announcements, CNBC, Axios, and Simon Willison’s reporting.
On May 6, Anthropic and SpaceX announced that Anthropic will take all of the compute capacity at Colossus 1, the Memphis data center xAI built for itself. Three hundred megawatts. More than 220,000 NVIDIA GPUs spanning H100, H200, and next-generation Blackwell accelerators. Online within a month. The same day, Anthropic doubled the five-hour rate limits on Claude Code Pro, Max, Team, and Enterprise plans, removed the peak-hour throttling that had been hitting paying customers in March and April, and “raised API rate limits considerably” for Opus models. Also the same day: Elon Musk testified in week two of his own federal lawsuit against Sam Altman over OpenAI’s mission, and he posted on X that no one at Anthropic had set off his “evil detector,” a direct reversal of his February 2026 broadside calling Anthropic “evil” and accusing it of hating Western Civilization. Three months from “evil” to “fair terms and pricing.” The simplest explanation is the right one: capacity, not ideology, is now setting deal terms at the top of the AI stack.
What’s in the deal
The mechanics, confirmed by both Anthropic and xAI’s matching announcement:
- 300+ megawatts at Colossus 1 in Memphis, exclusive to Anthropic, online within a month of May 6
- More than 220,000 NVIDIA GPUs, mixed H100, H200, and next-generation GB200 Blackwell
- A separate, looser “interest” in partnering on multi-gigawatt orbital compute via SpaceX (no agreement, no timeline)
- Counterparty is “SpaceXAI,” the rebranded entity Musk announced the same day; xAI as a separate company is being dissolved into SpaceX
- Term length, exclusivity beyond Colossus 1, and dollar value: not disclosed
What is not in the deal: Colossus 2, xAI’s larger sibling facility, which stays with SpaceXAI for Musk’s own model training.
Why now
Anthropic CEO Dario Amodei told developers at the Code with Claude conference in San Francisco the same day:
“I hope that 80-times growth doesn’t continue because that’s just crazy and it’s too hard to handle… That is the reason we have had difficulties with compute. [We’re] working as quickly as possible to provide more [and will] pass that compute on to you as soon as we can.”
Eighty times. In one quarter. Versus a planned ten times. The annualized run rate that closed 2025 around $9 billion was past $30 billion by Q1 2026, ahead of OpenAI’s reported $24 billion at the end of March. Eight of the Fortune 10 are paying enterprise customers. This is the demand side that produced the rate-limit pain, the same pain Anthropic publicly described in April as “inevitable strain on our infrastructure” affecting “reliability and performance especially during peak hours.” Three hundred megawatts is the supply side answering it.
The seller has its own clock. SpaceX confidentially filed for IPO on April 1, with the public filing expected in June. As Axios’s Madison Mills and Ina Fried framed it, the deal lets Musk turn unused compute into revenue before the IPO. Musk’s February attack on Anthropic and his May rental to Anthropic are both compatible with one position: SpaceX needs revenue stories on the books before bankers walk it through pricing, and Anthropic is the deepest-pocketed lab without an exclusive relationship with a single hyperscaler.
What changed in the API
The day-of product announcement is the part the buyer sees:
- Five-hour rate limits doubled on Claude Code Pro, Max, Team, and seat-based Enterprise plans
- Peak-hour limit reductions removed
- API rate limits “raised considerably” for Opus models; no published per-token price changes, no context-window changes
Bedrock pricing did not move. Sonnet 4.6 stayed at $3 / $15 per million input/output tokens, Opus 4.6 at $5 / $25, Haiku 4.5 at $1 / $5. The compute did not become cheaper; it became more available. That is the more useful read.
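At those list prices, per-request cost is simple arithmetic. A minimal sketch, assuming the quoted pairs are input/output rates per million tokens; the dictionary keys and token counts below are illustrative, not an official SDK:

```python
# List prices per million tokens as (input, output), from the figures above.
PRICES = {
    "sonnet-4.6": (3.00, 15.00),
    "opus-4.6": (5.00, 25.00),
    "haiku-4.5": (1.00, 5.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at list price."""
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1_000_000 * in_rate + output_tokens / 1_000_000 * out_rate

# Example: a 10k-token prompt with a 2k-token completion on each model.
for model in PRICES:
    print(model, round(request_cost(model, 10_000, 2_000), 4))
```

For that hypothetical 10k-in / 2k-out request, the math works out to roughly $0.06 on Sonnet, $0.10 on Opus, and $0.02 on Haiku, which is why the significance of the announcement is availability, not price.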
What it means for IT
Three things, none of them new categories.
Fewer 429s, more defensible SLAs. The peak-hour throttling that hit paying customers in March-April was the load-bearing reason “we have Claude in production but cannot promise availability” was an honest IT statement. Doubling the limits and removing peak-hour reductions does not fix that overnight, but it changes what is technically true to write into a vendor SLA exhibit. For shops that already have Claude flowing through Microsoft 365 or Foundry, this is the upstream supply event that lets the downstream commitments actually hold.
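Until the new capacity actually lands, the standard client-side defense against 429s remains retry with exponential backoff and jitter. A generic sketch, not tied to any particular SDK; `flaky` is a hypothetical stand-in for a rate-limited API call:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 rate-limit response."""

def with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn, retrying on RateLimitError with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise
            # Base delay doubles each attempt; jitter spreads out retry storms.
            delay = base_delay * (2 ** attempt) * (0.5 + random.random() / 2)
            sleep(delay)

# Hypothetical flaky call: returns 429 twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

result = with_backoff(flaky, sleep=lambda s: None)  # no real sleeping in the demo
print(result)  # → ok
```

Doubled limits mean this path fires less often, but keeping it in place is what makes the SLA language defensible either way.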
The compute moat hardened, again. Three hundred megawatts here is incremental on top of five gigawatts each from Amazon and Google, the $30 billion in Azure compute committed last November, and the Anthropic-Google-Broadcom five-gigawatt deal coming online in 2027. The frontier-AI buyer base has narrowed to a small number of labs that can write capacity contracts at this scale. That concentration is the structural risk, and it does not show up on a vendor questionnaire.
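To put "incremental" in perspective, a rough tally of the publicly stated power figures (the $30 billion Azure commitment is a dollar amount rather than a megawatt figure, so it is omitted here):

```python
# Capacity figures named above, in megawatts, as publicly stated.
commitments = {
    "SpaceX / Colossus 1 (this deal)": 300,
    "Amazon": 5_000,
    "Google": 5_000,
    "Anthropic-Google-Broadcom (online 2027)": 5_000,
}
total = sum(commitments.values())
for name, mw in commitments.items():
    print(f"{name}: {mw:,} MW ({mw / total:.1%} of the listed total)")
```

The 300 MW slice is roughly two percent of the power Anthropic has publicly lined up, which is the precise sense in which it is incremental rather than transformative.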
Vendor identity continues to matter less than capacity identity. Musk’s “evil detector” line is the cleanest expression of how vendor relationships are sorting in 2026. Companies that have to do business with each other to keep their models running will, regardless of public posture or active litigation. The relevant procurement question for an enterprise IT shop is no longer “which lab”; it is “which underlying capacity,” because that is the layer where the binding constraint is moving and where the next few cycles of price compression and rate-limit relief will originate.
Bottom line
The deal that read as an Elon-versus-Dario plot twist on the morning of May 6 reads, after the operating numbers are added, as a straightforward compute-as-commodity transaction: SpaceX needed revenue ahead of an IPO, Anthropic needed capacity ahead of demand, and 300 megawatts of GPUs that were going underused in Memphis went to the lab that could pay for them. The downstream effect for buyers is more headroom on Pro and Max plans, looser API limits on Opus, and a more credible reliability story for Bedrock and Foundry deployments. The downstream effect for the industry is one more confirmation that capacity is now the binding constraint at the frontier, and that compute, not ideology, is what is shaping the headlines.
Sources
- Anthropic - Higher usage limits for Claude and a compute deal with SpaceX
- xAI - New Compute Partnership with Anthropic
- CNBC - Anthropic, SpaceX announce compute deal that includes space development
- CNBC - Anthropic CEO says 80-fold growth in first quarter explains ‘difficulties with compute’
- Axios - How Elon grew to love Anthropic
- Simon Willison - xAI / Anthropic
