SpaceX has agreed to lease its entire Colossus 1 supercomputer to AI rival Anthropic, granting access to more than 222,000 Nvidia GPUs and over 300 megawatts of power for Claude service upgrades. The deal includes plans to explore multi-gigawatt orbital data centers, as terrestrial power and land constraints limit on-ground AI infrastructure expansion.
{{IMAGE:1}}
Anthropic and SpaceX announced a compute leasing agreement on Wednesday that gives Anthropic full access to the Colossus 1 supercomputer, originally built to train xAI’s Grok family of large language models. The deal covers more than 222,000 Nvidia GPUs spanning the H100, H200, and next-generation GB200 Blackwell accelerator systems, plus more than 300 megawatts of total IT power.
Immediate changes for Anthropic’s paid Claude users took effect yesterday. Claude Code’s five-hour rate limits doubled across all paid tiers, including Pro, Max, Team, and Enterprise subscriptions. Peak-hour limit reductions for Pro and Max subscribers were removed entirely, and API rate limits for Claude Opus models, which govern the volume of requests third-party developers can make, were raised considerably. The service changes were first reported by technical writer Etiido Uko. {{IMAGE:3}}
Anthropic noted the SpaceX deal joins similar agreements with Amazon, Google, and Microsoft, all aimed at building gigawatts of additional AI compute capacity. The company also expressed interest in partnering with SpaceX on multi-gigawatt orbital data center deployments, citing that the compute required to train and operate next-generation AI systems is outpacing terrestrial power, land, and cooling availability on required timelines.
Elon Musk confirmed the lease approval in a post on X, stating he met with senior Anthropic leadership to evaluate the company’s safeguards for ensuring Claude benefits humanity. He noted that no team members triggered his "evil detector," a stark reversal from earlier this year, when he described Claude as "misanthropic and evil." xAI will now shift focus to building Colossus 2, its next-generation cluster, with Colossus 1 fully leased to a direct AI competitor.
Technical Specifications and Supply Chain Context
The Colossus 1 cluster is among the largest AI training infrastructures ever deployed, with a total IT power draw of 300 megawatts. For context, Meta’s largest publicly disclosed AI cluster as of 2024 uses approximately 160 megawatts of IT power, making Colossus 1 nearly twice as large. The 300 megawatt figure equals the power consumption of roughly 230,000 average US households, or about 0.06% of total US annual electricity generation.
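The household comparison can be sanity-checked with a few lines of arithmetic. The per-household consumption figure below is an assumed US average (roughly 11,400 kWh per year), not a number from the announcement:

```python
# Sanity-check the household-equivalence figure for a 300 MW IT load.
CLUSTER_MW = 300
HOUSEHOLD_KWH_PER_YEAR = 11_400  # assumed average US household consumption

# MW -> kW, multiplied by hours in a year, gives annual energy in kWh.
cluster_kwh_per_year = CLUSTER_MW * 1_000 * 8_760
households = cluster_kwh_per_year / HOUSEHOLD_KWH_PER_YEAR
print(f"{households:,.0f} households")  # ~230,000 with these assumptions
```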
All 222,000 Nvidia GPUs in the cluster are fabricated on TSMC’s 4nm-class process nodes using CoWoS advanced packaging, which places high-bandwidth memory stacks alongside the GPU die on a silicon interposer to minimize latency. The H100 GPUs, which make up the largest portion of the cluster, feature 80GB of HBM3 memory, a 700W thermal design power (TDP), and peak FP16 performance of 1,979 teraflops with sparsity. H200 units add 141GB of HBM3e memory, a 76% increase over the H100, while maintaining the same 700W TDP and peak FP16 performance. Next-generation GB200 Blackwell accelerators use a dual-GPU B200 configuration paired with a Grace CPU, delivering 20 petaflops of FP8 performance per superchip, 192GB of HBM3e memory per GPU, and a 1000W TDP per B200 unit. References to Nvidia’s product lines are available at their respective pages: H100, H200, Blackwell GB200. TSMC’s CoWoS packaging technology details can be found on the foundry’s official site: TSMC Advanced Packaging.
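Collecting the per-accelerator figures quoted above makes the generational comparison easy to verify; the numbers are the article’s, and the dictionary layout is just for illustration:

```python
# Per-accelerator memory and TDP figures as quoted in the article.
SPECS = {
    "H100": {"memory_gb": 80,  "tdp_w": 700},
    "H200": {"memory_gb": 141, "tdp_w": 700},
    "B200": {"memory_gb": 192, "tdp_w": 1000},
}

# H200's HBM3e capacity relative to the H100's HBM3.
increase = (SPECS["H200"]["memory_gb"] / SPECS["H100"]["memory_gb"] - 1) * 100
print(f"H200 memory increase over H100: {increase:.0f}%")  # ~76%
```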
Supply chain constraints for these GPUs are severe. TSMC’s CoWoS advanced packaging capacity is fully booked through 2025, with Nvidia accounting for 60% of allocated supply. Securing 222,000 GPUs would have required SpaceX to place orders in early 2023, when H100 lead times exceeded 11 months and spot prices were 300% above list. Leasing the entire cluster to Anthropic shifts approximately 15% of Nvidia’s 2024 H100/H200/GB200 supply allocation to a single customer, reducing availability for other AI labs including OpenAI, Google DeepMind, and Meta. GPU hardware depreciates at an average rate of 30% annually, so leasing Colossus 1 allows SpaceX to monetize the cluster rather than letting it sit idle during Colossus 2 buildout.
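The 30% annual depreciation figure implies how quickly idle hardware loses value. A minimal sketch, assuming a simple geometric decline compounding yearly (real accounting schedules vary):

```python
# Residual value of GPU hardware under the article's 30% annual depreciation,
# assumed here to compound yearly -- a simplification for illustration.
def residual_fraction(years: int, rate: float = 0.30) -> float:
    """Fraction of original value remaining after `years` at annual `rate`."""
    return (1 - rate) ** years

for y in range(1, 5):
    print(f"Year {y}: {residual_fraction(y):.1%} of original value remains")
```

Under this assumption, an unused cluster would retain only about a third of its value after three years, which is the economic pressure behind leasing it out during the Colossus 2 buildout.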
Cooling requirements for the cluster are still substantial. Assuming a power usage effectiveness (PUE) of around 1.3, typical for large liquid-cooled AI clusters, the total facility power draw for Colossus 1 is roughly 390 megawatts, with cooling and other overhead accounting for about 90 megawatts as the facility dissipates heat from 222,000 high-TDP GPUs and associated networking hardware. The cluster is located in Memphis, Tennessee, where xAI secured access to dedicated substations to bypass local grid constraints. {{IMAGE:2}} (Image credit: xAI)
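PUE is a simple ratio, total facility power divided by IT equipment power, so facility draw can be computed for any assumed efficiency. The PUE values below are illustrative assumptions, not figures disclosed by xAI:

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
def facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Total facility draw implied by an IT load and a PUE ratio."""
    return it_load_mw * pue

IT_LOAD_MW = 300  # Colossus 1 IT power draw from the article
for pue in (1.2, 1.3, 1.5):  # assumed range for liquid-cooled facilities
    total = facility_power_mw(IT_LOAD_MW, pue)
    print(f"PUE {pue}: {total:.0f} MW total, {total - IT_LOAD_MW:.0f} MW overhead")
```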
Market Implications and Competitive Dynamics
The deal marks a rare instance of AI competitors sharing core infrastructure, driven by acute compute scarcity. Anthropic gains immediate access to training and inference capacity without waiting 12 to 18 months for new GPU orders, closing a critical infrastructure gap with larger rivals. For xAI, the lease generates recurring revenue to fund Colossus 2, which is expected to use next-generation Rubin architecture GPUs and exceed 1 gigawatt of total power draw when complete.
Musk’s reversal on Anthropic highlights how compute shortages are overriding competitive rivalries in the AI sector. Earlier criticism of Claude as unsafe gives way to a partnership that benefits both parties: SpaceX monetizes existing hardware, while Anthropic accelerates model training and reduces user latency. The deal also signals a shift away from owned AI infrastructure toward flexible leasing models, as GPU supply chains remain volatile and hardware depreciates rapidly.
Orbital data center plans address a growing constraint for the AI industry. US data centers consumed 2% of total US electricity in 2022, a figure projected to rise to 8% by 2030 as AI training workloads grow. 70% of new US data center capacity is concentrated in Northern Virginia, Texas, and California, where local grids are already at 90% capacity. Orbital data centers bypass these constraints by using solar power for electricity and passive space cooling, which eliminates the need for energy-intensive chillers. SpaceX’s vertical integration, including Falcon 9 launch vehicles, Starlink satellite manufacturing, and ground station networks, gives it a unique advantage in deploying orbital infrastructure. Current Falcon 9 launch costs are $2,720 per kilogram, so launching 222,000 GPUs (each ~1.5kg, total ~333 metric tons) would cost approximately $906 million, plus satellite bus and power system expenses. Scaling to multi-gigawatt orbital clusters would require Starship launches, which aim to reduce per-kg costs to $100, making large-scale orbital compute economically viable.
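The launch-cost estimate follows directly from the cited per-kilogram Falcon 9 price and the article’s assumed per-GPU mass, ignoring satellite bus and power hardware:

```python
# Reproduce the article's Falcon 9 launch-cost estimate for the GPU mass alone.
GPU_COUNT = 222_000
GPU_MASS_KG = 1.5         # per-GPU mass assumed in the article
COST_PER_KG_USD = 2_720   # Falcon 9 per-kg figure cited in the article

total_mass_kg = GPU_COUNT * GPU_MASS_KG       # ~333 metric tons
launch_cost = total_mass_kg * COST_PER_KG_USD
print(f"Mass: {total_mass_kg / 1000:.0f} t, cost: ${launch_cost / 1e6:.0f}M")
```

At Starship’s targeted $100/kg, the same payload would cost roughly $33 million to launch, which is the gap that makes multi-gigawatt orbital clusters contingent on Starship economics.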
Anthropic’s broader infrastructure strategy includes international expansion to Europe and Asia, targeting enterprise customers in healthcare, finance, and government that require local data residency. The company will prioritize partnerships in politically stable democratic countries with secure AI supply chains, and plans to offset rising electricity costs by reinvesting in host communities to secure long-term power purchase agreements. Electricity costs for data centers have risen 40% since 2020 due to natural gas price spikes, making community reinvestment a key strategy for securing affordable power.
Anthropic also unveiled a new Claude feature called "dreaming" alongside the compute deal, which allows AI agents to review past sessions, identify recurring errors, and reorganize memory files between interactions. The feature increases per-session compute usage by an estimated 12%, making the additional Colossus 1 capacity critical to maintaining response times for paid users. More details on Claude are available at Anthropic’s official site and Claude’s user portal.
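Combining the doubled rate limits with the estimated 12% "dreaming" overhead gives a rough upper bound on added capacity demand. This assumes usage scales linearly with the new limits, which the announcement does not state:

```python
# Illustrative upper bound on capacity growth if usage fully tracks the new limits.
RATE_LIMIT_MULTIPLIER = 2.0  # five-hour limits doubled across paid tiers
DREAMING_OVERHEAD = 0.12     # estimated extra per-session compute for "dreaming"

capacity_multiplier = RATE_LIMIT_MULTIPLIER * (1 + DREAMING_OVERHEAD)
print(f"Worst-case capacity need: {capacity_multiplier:.2f}x")  # 2.24x
```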
Outlook
Compute leasing between AI rivals is likely to become more common as supply chain constraints persist through 2026. Orbital data center partnerships will test whether space-based infrastructure can solve terrestrial power and land bottlenecks, with SpaceX and Anthropic as first movers. For the semiconductor supply chain, the deal confirms that Nvidia’s GPU allocation remains the single largest bottleneck for AI progress, with cluster leasing emerging as a way to optimize hardware utilization across the industry.