Major tech companies unite to develop optical interconnects for AI infrastructure, addressing copper limitations as models scale toward superintelligence.
The race to scale AI infrastructure has reached a critical juncture, with traditional copper interconnects hitting physical limits just as models push toward superintelligence. Today, AMD, NVIDIA, Broadcom, Meta, Microsoft, and OpenAI announced the formation of the Optical Compute Interconnect (OCI) Multi-Source Agreement (MSA) consortium, a collaborative effort to develop optical scale-up interconnects for next-generation AI clusters.

The Copper Ceiling
As large language models advance, the physical constraints of copper-based connectivity are becoming increasingly problematic. The OCI consortium identifies several key limitations:
- Physical reach restrictions limiting AI cluster scale-up domain architectures
- Bandwidth density bottlenecks constraining GPU-to-GPU communication
- Power inefficiencies at scale that undermine system performance
These constraints are particularly acute as AI workloads demand ever-greater inter-node communication bandwidth while maintaining low latency.
Optical Compute Interconnect Architecture
The OCI specification represents a fundamental shift in interconnect design philosophy. Rather than continuing the module-centric approach of traditional networking hardware, OCI moves toward a silicon-centric model that enables tighter integration between optics and compute/networking silicon.
Key technical features include:
- Non-return-to-zero (NRZ) modulation combined with wavelength division multiplexing (WDM) optical technology
- Power, latency, and cost optimization targeting parity with copper solutions
- Silicon-centric integration enabling meaningful gains in bandwidth density
Performance Roadmap
The consortium has outlined a clear performance trajectory:
- OCI GEN1: 4λ x 50Gbps NRZ (200Gbps/direction)
- OCI GEN2: 400Gbps/direction bidirectional (BiDi) technology, achieving up to 800Gbps per fiber
- Future roadmap: Scaling to 3.2Tbps per fiber and beyond
This progression enables scale-up domains with both higher GPU counts and increased bandwidth per GPU, addressing the exponential growth in AI model complexity.
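The roadmap's headline figures follow from simple WDM arithmetic: per-direction bandwidth is wavelengths times per-wavelength line rate, and BiDi carries both directions on one fiber. A minimal sketch of that math, where the function name and the GEN2 breakdown are illustrative assumptions rather than details from the OCI spec:

```python
# Illustrative sketch, not taken from the OCI MSA specification.
# per_direction_gbps and the GEN2 lane assumptions are hypothetical,
# chosen only to reproduce the headline roadmap figures.

def per_direction_gbps(wavelengths: int, lane_rate_gbps: int) -> int:
    """Aggregate one-direction bandwidth of a WDM link."""
    return wavelengths * lane_rate_gbps

# OCI GEN1: 4 wavelengths x 50 Gbps NRZ per wavelength
gen1 = per_direction_gbps(4, 50)   # 200 Gbps/direction

# OCI GEN2: 400 Gbps/direction; BiDi runs both directions over a
# single fiber, so per-fiber throughput is double the per-direction rate.
gen2_per_direction = 400
gen2_per_fiber = gen2_per_direction * 2   # 800 Gbps per fiber

print(gen1, gen2_per_fiber)   # 200 800
```

The same relation shows why the future targets require either more wavelengths per fiber or faster per-wavelength signaling: reaching 3.2 Tbps per fiber means a 4x jump over GEN2's per-fiber total.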
Form Factor Flexibility
The specification supports multiple deployment models to accommodate different system architectures:
- Pluggable optics for modular upgrades
- On-board optics for integrated designs
- Co-packaged optics (CPO) for maximum density
This flexibility ensures the technology can be adopted across diverse hardware platforms and use cases.
The Intel Question
Notably absent from the consortium is Intel, despite the company's significant investments in both AI hardware and optical interconnect technologies. This absence raises questions about potential fragmentation in the optical interconnect ecosystem, though the consortium's open specification approach may eventually accommodate additional participants.
Industry Implications
The formation of OCI represents more than just a technical specification—it signals a coordinated industry response to the scaling challenges facing AI infrastructure. By establishing a multi-vendor, open ecosystem, the consortium aims to prevent proprietary lock-in while accelerating the development of optical solutions.
For AI developers and data center operators, OCI promises:
- Greater scalability for training larger models
- Improved power efficiency at scale
- Reduced latency for distributed training workloads
- Cost optimization through standardized interfaces
The full OCI MSA technical specification is available at OCI-MSA.org for those interested in implementation details.

As AI models continue their march toward superintelligence, the infrastructure supporting them must evolve accordingly. The Optical Compute Interconnect consortium represents a significant step toward optical solutions that can meet the demanding requirements of next-generation AI workloads, potentially reshaping the data center landscape for years to come.
