Broadcom has committed to supplying Anthropic with 3.5 gigawatts of Google TPU capacity starting in 2027, extending a massive AI infrastructure partnership that includes networking components through 2031 and positions Broadcom as the silicon implementation partner for two of America's three largest frontier model developers.
Broadcom has announced a massive expansion of its AI infrastructure partnership with Anthropic, committing to supply 3.5 gigawatts of Google TPU capacity starting in 2027 through a new supply assurance agreement that extends through 2031. The disclosure came in a Monday securities filing that outlines a three-way collaboration routing Google-designed TPUs to Anthropic via Broadcom's networking and component supply chain.
This new capacity builds on an existing 1 GW arrangement already scheduled to come online in 2026 under a Google Cloud agreement announced last October. The expanded commitment positions Broadcom as a critical infrastructure partner for Anthropic, which says its annualized revenue run rate now exceeds $30 billion, up from approximately $9 billion at the end of 2025.
The Technical Partnership Structure
The Monday filing covers two distinct but linked arrangements. First, Broadcom will provide networking and other components for Google's next-generation AI racks through 2031. Second, the expanded collaboration routes Google-designed TPUs to Anthropic as part of a multi-gigawatt commitment for next-generation TPU-based compute.
The vast majority of this new infrastructure will be located in the United States, extending Anthropic's $50 billion American AI infrastructure commitment made in November 2025. This domestic focus aligns with broader U.S. technology sovereignty initiatives and positions the partnership as a cornerstone of American AI infrastructure development.
Broadcom's Role as Silicon Implementation Partner
Google owns both the TPU architecture and software stack, with Broadcom functioning as the silicon implementation partner. This relationship involves converting Google's architectural designs into manufacturable ASIC layouts while supplying high-speed SerDes, power management, and packaging solutions. TSMC handles the actual fabrication of the chips.
This division of labor mirrors Broadcom's separate $10 billion custom silicon program with OpenAI, announced as a 10 GW co-development effort last October. The parallel partnerships make Broadcom the implementation layer for two of the three largest U.S. frontier model developers, creating a unique position in the AI infrastructure ecosystem.
Anthropic's Explosive Growth Trajectory
Anthropic's revenue growth has been remarkable, with the company reporting that its annualized revenue run rate has now passed $30 billion. That is more than a threefold increase in under a year, up from around $9 billion at the end of 2025. The company also said that more than 1,000 business customers now spend over $1 million annually on its services, roughly double the figure it reported in February 2025.
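The growth figures above can be sanity-checked with simple arithmetic. Note that the February 2025 customer baseline of ~500 is implied by the "doubling" claim rather than stated directly:

```python
# Back-of-envelope check on Anthropic's reported growth figures.
run_rate_end_2025 = 9e9   # ~$9B annualized run rate, end of 2025
run_rate_now = 30e9       # >$30B annualized run rate, per the announcement

growth_multiple = run_rate_now / run_rate_end_2025
print(f"Run-rate growth: {growth_multiple:.1f}x")  # → 3.3x, i.e. more than tripled

# $1M+ customers roughly doubled from February 2025.
customers_feb_2025 = 500  # implied baseline (assumption, not stated in the filing)
customers_now = 1000      # "more than 1,000" per the announcement
print(f"Customer growth: {customers_now / customers_feb_2025:.0f}x")  # → 2x
```

Since $30 billion is described as a floor ("exceeding"), the true multiple is at least 3.3x.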
"This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base," said Krishna Rao, Anthropic's chief financial officer, in a blog post announcing the partnership.
The Broader AI Infrastructure Landscape
Despite these massive custom silicon commitments, both Anthropic and OpenAI continue to rely heavily on Nvidia GPUs through major cloud providers including AWS, Google Cloud, and Microsoft Azure. OpenAI has also committed to 6 GW of AMD GPU capacity, with the first gigawatt expected in the second half of this year.
Amazon Web Services remains Anthropic's primary cloud and training partner under Project Rainier, the Trainium 2-based supercluster in Indiana. The new Google-Broadcom capacity sits alongside that arrangement rather than replacing it, creating a multi-provider infrastructure strategy that reduces dependency on any single vendor.
Market Impact and Financial Projections
Analysts at Mizuho, led by Vijay Rakesh, estimate that Broadcom will record $21 billion in AI revenue from Anthropic in 2026 and $42 billion in 2027. These projections were published in a note following Broadcom's March earnings call, though the SEC filing itself didn't contain specific revenue amounts.
The scale of these commitments—totaling tens of gigawatts across multiple partnerships—represents one of the largest infrastructure buildouts in the history of the semiconductor industry. The 3.5 GW committed to Anthropic is a measure of electrical power rather than chip count: roughly the output of three large nuclear reactors, and enough to run on the order of a few million accelerators once cooling and power-delivery overhead are accounted for, depending on per-chip power draw.
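A rough sketch of what 3.5 GW might translate to in chip terms. The per-accelerator draw and overhead factor below are illustrative assumptions, since per-chip power figures for Google's next-generation TPUs have not been disclosed:

```python
# Rough conversion from contracted power capacity to accelerator count.
# All per-chip figures are illustrative assumptions, not disclosed specs.
capacity_gw = 3.5
capacity_w = capacity_gw * 1e9

watts_per_accelerator = 1000  # assumed chip + memory + host share (hypothetical)
pue = 1.2                     # assumed facility overhead: cooling, power delivery

effective_w_per_chip = watts_per_accelerator * pue
accelerators = capacity_w / effective_w_per_chip
print(f"~{accelerators / 1e6:.1f} million accelerators")  # → ~2.9 million
```

Halving or doubling the assumed per-chip draw shifts the result proportionally, but the order of magnitude—millions of accelerators—holds across plausible values.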
Technical Implications for AI Development
The partnership has significant technical implications for the AI development ecosystem. By securing guaranteed access to custom silicon through 2031, Anthropic gains predictability in its infrastructure planning that's increasingly rare in the rapidly evolving AI landscape. The multi-year commitment also enables deeper optimization between Anthropic's model architectures and Google's TPU designs.
The networking and component supply agreement ensures that the full stack—from silicon to system interconnect—is optimized for Anthropic's workloads. This holistic approach to infrastructure design could provide performance advantages over more fragmented AI infrastructure strategies.
Strategic Positioning in the AI Race
Broadcom's position as the implementation partner for both Anthropic and OpenAI creates interesting dynamics in the competitive AI landscape. While the company maintains strict confidentiality around each partnership, the technical expertise gained from working with multiple leading AI developers could accelerate innovation across its custom silicon programs.
The geographic concentration of infrastructure in the United States also has strategic implications, particularly as governments worldwide scrutinize the concentration of AI capabilities and infrastructure. By building out domestic capacity at this scale, Anthropic and its partners are creating a foundation for continued U.S. leadership in frontier AI development.
Looking Ahead: The 2027 Timeline
The 2027 start date for the expanded TPU capacity gives Anthropic time to optimize its models and infrastructure for the new hardware generation. This timeline also aligns with expected advances in AI model capabilities and the scaling laws that have driven the industry's exponential growth.
As the AI industry continues to push the boundaries of what's possible with large language models and other frontier AI systems, partnerships like this one between Broadcom, Google, and Anthropic will likely become increasingly common. The combination of custom silicon, guaranteed capacity, and long-term planning horizons represents a mature approach to AI infrastructure development that could define the next phase of the industry's evolution.
The scale and scope of these commitments—spanning gigawatts of compute, billions in revenue, and multiple years of guaranteed supply—underscore just how central infrastructure has become to the AI race. In an industry where compute capacity often determines competitive advantage, securing guaranteed access to custom silicon through 2031 could prove to be a decisive strategic move for Anthropic as it continues its rapid growth trajectory.