Meta Compute: Zuckerberg's Nuclear-Powered Bet on a Gigawatt-Scale AI Future
#Infrastructure


Hardware Reporter

Meta has formed 'Meta Compute,' a new division tasked with building out hundreds of gigawatts of AI datacenter capacity, backed by massive nuclear power deals to fuel the expansion.

Mark Zuckerberg is done playing small. In a move that signals a fundamental shift in how Meta approaches infrastructure, the company has formed a new internal division called "Meta Compute" designed to orchestrate what Zuckerberg calls a plan to build "tens of gigawatts this decade, and hundreds of gigawatts or more over time." This isn't just another datacenter project; it's a wholesale reimagining of the company's physical footprint, driven by an insatiable need for compute power to fuel the next generation of AI models.


The Meta Compute Mandate

The formation of Meta Compute represents a maturation of the company's infrastructure strategy. Rather than treating datacenters as a necessary cost of doing business, Meta is now positioning them as a core strategic advantage. As Zuckerberg outlined in his Threads post, the goal is to engineer, invest, and partner in ways that create a competitive moat in the AI race.

The division will be co-led by two executives with complementary expertise:

  • Santosh Janardhan continues his role as head of global infrastructure, overseeing the technical architecture, software stack, silicon program, developer productivity, and the physical operation of the global datacenter fleet.
  • Daniel Gross, who joined Meta's Superintelligence team in mid-2025, will lead a new group focused on long-term capacity strategy, supplier partnerships, industry analysis, planning, and business modeling.

Both will work closely with Dina Powell McCormick, recently hired as President and Vice Chairman, who brings 16 years of Goldman Sachs experience and political advisory credentials to the table. Her role is particularly critical: she'll be "partnering with governments and sovereigns to build, deploy, invest in, and finance Meta's infrastructure." This signals that Meta recognizes the geopolitical complexity of building planet-scale compute infrastructure and is bringing in heavy artillery to navigate the regulatory and financial landscape.

The Power Problem

The formation of Meta Compute comes as the company confronts a stark reality: AI datacenters are power-hungry beasts, and the grid isn't keeping up. Meta's solution is to go straight to the source of reliable, carbon-free baseload power: nuclear.

Last week, Meta signed three new long-term nuclear energy contracts with TerraPower, Oklo, and Vistra. Combined with existing commitments to Constellation Energy, Meta has now secured approximately 6.6 gigawatts of atomic power capacity. To put that in perspective, 6.6 gigawatts is enough to power roughly 5 million homes, or the entire state of Maryland.
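The homes comparison is easy to sanity-check. A minimal back-of-envelope sketch, assuming an average US household consumes roughly 10,500 kWh per year (a commonly cited EIA-style figure, not a number from Meta's announcement):

```python
# Back-of-envelope: how many average US homes 6.6 GW could power.
# Assumes ~10,500 kWh/year per household, i.e. an average
# continuous draw of about 1.2 kW per home (illustrative only).

NUCLEAR_CAPACITY_GW = 6.6
KWH_PER_HOME_PER_YEAR = 10_500
HOURS_PER_YEAR = 8_760

avg_kw_per_home = KWH_PER_HOME_PER_YEAR / HOURS_PER_YEAR   # ~1.2 kW
homes_powered = NUCLEAR_CAPACITY_GW * 1e6 / avg_kw_per_home  # GW -> kW

print(f"Average draw per home: {avg_kw_per_home:.2f} kW")
print(f"Homes powered: {homes_powered / 1e6:.1f} million")
```

With those assumptions the 6.6 gigawatts works out to roughly 5.5 million homes, consistent with the "roughly 5 million" figure above.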

This nuclear-first strategy reflects a hard-nosed assessment of renewable limitations. While solar and wind have made incredible progress, they can't provide the 24/7 reliability that AI training and inference demand. Nuclear offers the baseload capacity Meta needs, with the added benefit of being carbon-free, which supports the company's sustainability commitments.

The nuclear deals also reveal Meta's long-term thinking. These are multi-decade commitments that lock in power prices and capacity, effectively insulating the company from energy market volatility. For a company planning to spend $72 billion in capital expenditure for fiscal 2025 alone, securing predictable energy costs is a hedge against inflation and supply shocks.

The AI Ambition Behind the Infrastructure

All this infrastructure spending is in service of Zuckerberg's stated goal: delivering "personal superintelligence" to the masses. But the road to superintelligence is paved with compute, and Meta's current AI efforts have hit some turbulence.

The company's open-source Llama 4 models received a lackluster reception, failing to match the performance of competitors like OpenAI's GPT-4 or Anthropic's Claude. The departure of machine learning pioneer Yann LeCun, who stepped back from his chief AI scientist role, raised questions about Meta's research direction. Meanwhile, Meta has been engaged in a talent war with OpenAI, losing key researchers to the ChatGPT creator.

In response, Meta appears to be pivoting strategy. Reports suggest Zuckerberg has abandoned the Llama roadmap in favor of proprietary models codenamed "Avocado" and "Mango." This represents a significant departure from the company's previous open-source-first approach to AI development.

Yet the company continues to release some models openly, like the Segment Anything Model (SAM) family for image segmentation. This mixed strategy suggests Meta is trying to balance the benefits of open research with the competitive advantages of proprietary technology.

The Scale of Ambition

Meta's current datacenter construction projects already span gigawatt-scale facilities across Ohio, Louisiana, and Texas, with additional locations in the pipeline. The formation of Meta Compute suggests these are just the beginning.

The "hundreds of gigawatts" target is staggering. Consider that the entire global datacenter industry currently draws on the order of tens of gigawatts of average power. Meta alone wants to multiply that by roughly an order of magnitude. This isn't just building more datacenters; it's fundamentally reshaping global energy and compute infrastructure.

The scale requires unprecedented partnerships. McCormick's role in negotiating with governments and sovereigns highlights the complexity: building gigawatt-scale datacenters requires not just power, but land, water, cooling resources, fiber connectivity, and regulatory approvals across multiple jurisdictions.

The Competitive Context

Meta's infrastructure push comes as the entire tech industry races to build AI capacity. Microsoft and OpenAI are reportedly planning a $100 billion datacenter project. Amazon is investing billions in AWS AI infrastructure. Google continues expanding its global footprint.

But Meta's approach is distinct in several ways:

  1. Vertical Integration: By controlling the entire stack from silicon to software to facilities, Meta aims for efficiency gains that competitors might miss.
  2. Nuclear Focus: While other companies are pursuing renewables, Meta is going all-in on nuclear as the foundation of its power strategy.
  3. Sovereign Partnerships: The emphasis on government partnerships suggests Meta sees infrastructure as a geopolitical asset, not just a technical one.

The Stakes

The formation of Meta Compute is a bet that AI infrastructure is the next great competitive moat. Zuckerberg is essentially saying that whoever controls the most reliable, scalable, cost-effective compute power will win the AI race.

The $72 billion capital expenditure forecast for 2025 is just the start. If Meta truly plans to build hundreds of gigawatts of capacity over the coming decades, we're talking about trillions of dollars in investment. That's not just a corporate strategy; it's a national-scale industrial project.
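The "trillions" claim follows from simple multiplication. A rough sketch, assuming an all-in build cost of $10-15 billion per gigawatt of AI datacenter capacity (a loose industry figure covering chips, buildings, and power infrastructure, not a number Meta has disclosed):

```python
# Rough cost extrapolation for "hundreds of gigawatts" of AI capacity.
# COST_PER_GW_USD is an assumed midpoint of loosely cited industry
# figures ($10-15B per GW all-in); TARGET_GW is a conservative
# reading of "hundreds". Both are illustrative, not Meta's numbers.

COST_PER_GW_USD = 12.5e9
TARGET_GW = 200

total_usd = COST_PER_GW_USD * TARGET_GW
print(f"Implied investment: ${total_usd / 1e12:.1f} trillion")
```

Even at the low end of "hundreds," the arithmetic lands in the multi-trillion-dollar range, which is why this reads more like a national industrial program than a line item in a corporate budget.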

The nuclear deals are the linchpin. Without reliable, carbon-free power at massive scale, the compute ambitions collapse. Meta's partnerships with TerraPower, Oklo, Vistra, and Constellation represent the company's recognition that energy is the ultimate constraint on AI growth.

What Comes Next

The success of Meta Compute will depend on execution. Building datacenters is hard; building them at gigawatt scale, let alone hundreds of gigawatts, is unprecedented. The company will need to:

  • Navigate complex regulatory environments for nuclear power and datacenter construction
  • Manage supply chains for everything from GPUs to cooling systems to concrete
  • Recruit and retain the engineering talent to design and operate these facilities
  • Balance the massive capital requirements with shareholder expectations
  • Maintain social license to operate in communities affected by these massive projects

Zuckerberg's Threads post was more than a corporate announcement; it was a declaration of intent. Meta is no longer content to rent compute from others or build at modest scale. The company wants to own the infrastructure of the future, and it's willing to spend whatever it takes to get there.

The question isn't whether Meta can build this infrastructure. With $72 billion in annual capex and nuclear power contracts already signed, the money and energy are secured. The question is whether the AI models running on this infrastructure will deliver enough value to justify the investment.

If Meta's "personal superintelligence" vision materializes, this infrastructure could power the next generation of human-computer interaction. If it doesn't, Meta will have built the world's most expensive stranded assets.

Either way, the formation of Meta Compute marks a turning point. The AI race just became an infrastructure race, and Meta is betting it can out-build, out-power, and out-partner the competition. The planet will be paved with datacenters, powered by atoms, and controlled by a company that once just wanted to connect people. Now it wants to compute for them, at a scale that redefines what's possible.

The gigawatts are coming. The nuclear reactors are spinning up. And Zuckerberg is all in.
