Oracle announces an unprecedented 18-zettaFLOPS AI infrastructure rollout built on 800,000 Nvidia Blackwell GPUs and 50,000 AMD Instinct MI450X accelerators, marking one of the largest compute deployments to date. The clusters will leverage Nvidia's Spectrum-X networking and AMD's open UALink architecture, with OpenAI positioned as a key beneficiary. This massive scaling raises critical questions about practical FP4 utilization and the staggering energy demands of next-gen AI.
Oracle has unveiled plans to deploy over 18 zettaFLOPS of AI computing power by late 2026, combining 800,000 Nvidia Blackwell GPUs and 50,000 AMD Instinct MI450X accelerators in its cloud infrastructure. This colossal deployment, equivalent to 18 sextillion (18 × 10²¹) floating-point operations per second, signals an aggressive push into hyperscale AI territory.
Nvidia’s Blackwell Powerhouse
Nvidia’s 800,000-GPU cluster, part of Oracle Cloud Infrastructure’s Zettascale10 offering, will deliver 16 zettaFLOPS of sparse FP4 performance. Beyond the hardware, Oracle is adopting Nvidia’s full stack: Spectrum-X Ethernet for networking and the Nvidia AI Enterprise software suite. This represents a major endorsement of Nvidia’s Spectrum-X platform at unprecedented scale.
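As a rough sanity check on those totals (a back-calculation from the announced figures, not an official per-GPU specification), the implied sparse FP4 throughput per Blackwell GPU works out to about 20 petaFLOPS:

```python
# Implied per-GPU throughput derived from the announced totals; this is a
# rough back-calculation, not an official Nvidia per-GPU specification.
total_sparse_fp4 = 16e21     # 16 zettaFLOPS sparse FP4, announced cluster total
gpu_count = 800_000          # Blackwell GPUs in the Zettascale10 cluster

per_gpu = total_sparse_fp4 / gpu_count
print(f"Implied sparse FP4 per GPU: {per_gpu / 1e15:.0f} petaFLOPS")  # ~20 PFLOPS
```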
AMD’s Open Alternative
AMD’s 50,000 MI450X GPUs will be configured in Helios racks—72 accelerators per unit—using the open Ultra Accelerator Link (UALink) standard as an alternative to Nvidia’s proprietary NVLink.
Each Helios rack delivers 2.9 exaFLOPS of FP4 compute and 31 TB of HBM4 memory, rivaling Nvidia’s Vera Rubin systems in raw throughput.
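Applying the same rough arithmetic to the AMD figures quoted above (rack count rounded down, no efficiency losses assumed) suggests the MI450X cluster contributes roughly 2 zettaFLOPS, which together with Nvidia's 16 lines up with the 18-zettaFLOPS headline:

```python
# Rough cross-check of the AMD Helios numbers against the 18 zettaFLOPS headline.
# All inputs come from the quoted figures; the rack count is rounded down and
# interconnect/utilization losses are ignored.
mi450x_count = 50_000
gpus_per_rack = 72
rack_fp4_flops = 2.9e18          # 2.9 exaFLOPS FP4 per Helios rack
rack_hbm4_bytes = 31e12          # 31 TB HBM4 per rack

racks = mi450x_count // gpus_per_rack                 # ~694 racks
amd_total = racks * rack_fp4_flops                    # ~2.0 zettaFLOPS
per_gpu_flops = rack_fp4_flops / gpus_per_rack        # ~40 petaFLOPS FP4 per GPU
per_gpu_hbm4 = rack_hbm4_bytes / gpus_per_rack        # ~430 GB HBM4 per GPU

print(f"Helios racks: {racks}")
print(f"AMD cluster total: {amd_total / 1e21:.1f} zettaFLOPS")
print(f"Per MI450X: {per_gpu_flops / 1e15:.0f} PFLOPS FP4, {per_gpu_hbm4 / 1e9:.0f} GB HBM4")
print(f"Combined with Nvidia's 16 ZF: {(amd_total + 16e21) / 1e21:.0f} zettaFLOPS")
```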
The Precision Paradox
While the zettaFLOP figures are staggering, practical utilization remains complex:
- FP4 limitations: FP4 has mainly been used to compress weights and serve inference; it has seen little adoption for training, where BF16 and FP8 still dominate (see the sketch after this list for the precision trade-off involved).
- Scalability challenges: Few organizations can partition workloads across 50,000+ GPUs effectively. Nvidia’s recent research suggests FP4 training viability, but real-world model accuracy remains unproven at this scale.
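To make the precision concern concrete, the sketch below compares round-trip quantization error at 4 and 8 bits. It uses plain symmetric integer quantization in NumPy as a simplified stand-in; actual FP4 training formats (block-scaled E2M1 variants, for example) are floating-point and more sophisticated, so treat this only as an illustration of how coarse 4-bit resolution is:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=4096).astype(np.float32)  # toy weight tensor

def quantize_roundtrip(x, bits):
    """Symmetric per-tensor integer quantization, then dequantization.

    A simplified stand-in for low-precision formats: real FP4 (e.g. E2M1 with
    block scaling) behaves differently, but the resolution gap is comparable.
    """
    qmax = 2 ** (bits - 1) - 1                      # only 7 levels per sign at 4-bit
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

for bits in (16, 8, 4):
    err = np.abs(weights - quantize_roundtrip(weights, bits))
    print(f"{bits}-bit round trip: mean abs error {err.mean():.2e}, max {err.max():.2e}")
```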
OpenAI’s Central Role
OpenAI emerges as a primary beneficiary, with both Nvidia and AMD securing deals contingent on large-scale deployments. Oracle’s rollout aligns with AMD’s agreement to supply accelerators supporting 6 gigawatts of compute—with the 50,000-GPU cluster representing just the first phase.
The Energy Equation
This infrastructure demands extraordinary power. As The Register's report puts it:
"We're all going to be paying AI's Godzilla-sized power bills"
Analysts note that Oracle may need to borrow more than $25 billion a year to fund deployments on this scale, economics that look unsustainable without breakthrough efficiency gains.
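For a sense of scale, here is a back-of-the-envelope estimate of the electricity bill for the 6 gigawatts mentioned above, under assumed (not sourced) conditions of continuous full-load operation at a flat $0.08/kWh:

```python
# Back-of-the-envelope annual electricity bill for 6 GW of compute.
# Assumptions (not from the source): continuous full-load operation and a
# flat $0.08/kWh rate; real costs depend on utilization, cooling overhead
# (PUE), and negotiated power contracts.
power_gw = 6
hours_per_year = 24 * 365

energy_twh = power_gw * hours_per_year / 1_000    # GWh -> TWh
cost_usd = energy_twh * 1e9 * 0.08                # TWh -> kWh, times $/kWh

print(f"Annual energy: {energy_twh:.1f} TWh")               # ~52.6 TWh
print(f"Annual electricity cost: ~${cost_usd / 1e9:.1f}B")   # ~$4.2B
```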
Source: The Register
