Nvidia is making a multibillion-dollar bet on optical interconnects to overcome the physical limitations of copper wiring in its quest to build ever-larger AI systems. The GPU giant's move toward photonics represents a fundamental shift in how massive AI clusters will be built and connected in the coming years.

The copper ceiling
For years, copper interconnects have been the backbone of Nvidia's GPU systems, offering cost-effective, reliable connections with zero power consumption for the cabling itself. But copper has a critical weakness: signal degradation. At the 1.8 TB/s speeds required for modern AI workloads, copper cables can only stretch a few feet before the signal becomes unreliable.
This limitation forced Nvidia to pack as many GPUs as possible into a single rack. The company's current flagship, the Grace Blackwell NVL72, uses a copper backplane with miles of cabling to make 72 GPUs behave like one enormous AI accelerator. The NVSwitches that coordinate those GPUs sit in the middle of the rack precisely so that every copper run stays short enough.
The power problem
When Nvidia first considered optical interconnects for the NVL72, the power consumption was staggering. Each Blackwell GPU would have required eighteen 800 Gbps pluggable optics modules: nine on the accelerator side and nine on the switch side. While an individual pluggable draws only 10-15 watts, multiplied across 72 GPUs that would have added roughly 20,000 watts of power consumption to the system.
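The arithmetic behind that figure can be checked directly. A back-of-the-envelope sketch, using only the numbers cited above (18 pluggables per GPU, 10-15 W per module, 72 GPUs); the per-module wattage range is the article's, not an official Nvidia spec:

```python
# Back-of-the-envelope check of the pluggable-optics power figure.
# Inputs come from the article: eighteen 800 Gbps pluggables per GPU
# (nine accelerator-side, nine switch-side), 10-15 W each, 72 GPUs.

GPUS = 72
PLUGGABLES_PER_GPU = 18

for watts_per_module in (10, 15):
    total_watts = GPUS * PLUGGABLES_PER_GPU * watts_per_module
    print(f"{watts_per_module} W/module -> {total_watts:,} W added")

# 10 W/module -> 12,960 W added
# 15 W/module -> 19,440 W added
```

At the high end of the per-module range, the total lands just shy of the ~20,000 W the article cites.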
Co-packaged optics to the rescue
Recent advancements in co-packaged optics (CPO) have changed the equation dramatically. By integrating optical engines directly alongside switch ASICs, CPO dramatically reduces power consumption and the number of pluggable modules required. Nvidia began embracing CPO in 2025 by integrating it into its Spectrum Ethernet and Quantum InfiniBand switches.
Vera Rubin: The hybrid approach
Nvidia's upcoming Vera Rubin generation will use a hybrid approach, combining copper and optical interconnects. The first layer of the network will use copper interconnects within the rack, meaning no changes to the GPUs themselves. The second spine layer will use pluggable modules to connect multiple racks together.
This approach allows Nvidia to scale up to 576 GPUs across multiple racks while maintaining the benefits of copper for short-distance connections. The company is essentially building a two-tier fat tree topology, with 72 ASICs in the spine layer.
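The scale math of that two-tier layout is easy to verify from the figures given: 72 GPUs per rack and 576 GPUs total implies eight rack-scale domains stitched together by the spine. A minimal sketch, using only the article's numbers (the hop-count observation is a general property of fat trees, not a disclosed Nvidia figure):

```python
# Scale check on the two-tier fat tree described above.
# Figures from the article: 72 GPUs per rack, 576 GPUs total,
# 72 ASICs in the spine layer.

GPUS_PER_RACK = 72
TOTAL_GPUS = 576
SPINE_ASICS = 72

racks = TOTAL_GPUS // GPUS_PER_RACK
print(f"{racks} racks of {GPUS_PER_RACK} GPUs each")  # 8 racks

# In a two-tier fat tree, intra-rack traffic stays on the copper
# first layer; cross-rack traffic takes at most two switch hops
# (leaf -> spine -> leaf) over the optical spine.
```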
Feynman: The optical future
Where things get really interesting is with Nvidia's Feynman generation, scheduled to ship in mid-to-late 2028. These systems will be available with either copper or co-packaged optical NVLink interconnects, potentially enabling systems with over 1,000 GPUs.
Nvidia is considering two main approaches for Feynman. The simpler option would integrate CPO into the NVLink switch ASIC while continuing to use copper interconnects within the rack. This would require a two-tier NVSwitch fabric and multiple switch ASICs.
The more ambitious approach would integrate CPO into both the switch and the GPU package itself. This would reduce the fabric to a single tier but would likely require multiple Feynman GPU SKUs - one with optics and one without.
Supply chain bets
Nvidia's optical ambitions require a robust supply chain for laser modules and optical components. Last month, the company invested $4 billion ($2 billion each) in Coherent and Lumentum, both specialists in optical lasers. Earlier this week, Nvidia announced a $2 billion partnership with Marvell to develop optical I/O technologies and integrate NVLink Fusion into custom XPUs.
These investments suggest Nvidia is serious about making optical interconnects a core part of its infrastructure. The company's moves also align with broader industry trends, as competitors like Lightmatter and Ayar Labs are developing their own photonics solutions.
Why it matters
The shift to optical interconnects isn't just about building bigger systems - it's about building more efficient ones. As AI models continue to grow in size and complexity, the ability to connect thousands of GPUs with minimal latency becomes crucial. Optical interconnects offer the bandwidth and reach that copper simply cannot provide at scale.
For the AI industry, Nvidia's optical push could accelerate the development of even larger and more powerful models. For data centers, it represents a significant infrastructure investment but one that could pay dividends in performance and efficiency. And for the photonics industry, it validates years of research and development, potentially opening up new markets and applications beyond AI infrastructure.
Nvidia's journey from copper to optics mirrors the broader evolution of computing infrastructure. Just as the industry moved from copper to fiber for long-distance networking, it's now making the same transition for high-performance computing. The question isn't whether optical interconnects will become standard in AI systems, but how quickly the industry can overcome the remaining technical and economic hurdles.
