Cisco enters the 102.4T switch arena with its Silicon One G300, targeting AI clusters with unique congestion controls and P4 programmability while introducing power-efficient optics.
Cisco has launched its Silicon One G300 switch silicon, a 102.4 terabit-per-second (Tbps) application-specific integrated circuit (ASIC) designed to power next-generation AI networking infrastructure. Positioned as a direct competitor to Broadcom's Tomahawk 6 and Nvidia's Spectrum-X, the G300 arrives as hyperscalers and enterprises seek higher bandwidth and efficiency to support massive GPU clusters. With AI training workloads demanding unprecedented network scale, Cisco claims architectural innovations in congestion management and programmability give it an edge.
Technical Specifications and Scaling Advantages
The G300 integrates 512 serializer/deserializer (SerDes) lanes operating at 200 Gbps each. This enables flexible port configurations:
- 128 ports at 800 Gbps
- 64 ports at 1.6 Tbps
- 32 ports at 1.6 Tbps with breakout to 128x 400 Gbps
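The port options above follow directly from the SerDes math. A quick sanity check (illustrative only, using the figures quoted in this article):

```python
# Sanity-check of the G300 port math: 512 SerDes lanes at 200 Gbps each
# yield 102.4 Tbps, which divides into the quoted port configurations.
SERDES_LANES = 512
GBPS_PER_LANE = 200

total_gbps = SERDES_LANES * GBPS_PER_LANE
assert total_gbps == 102_400  # 102.4 Tbps aggregate

for port_gbps in (800, 1600):
    print(f"{total_gbps // port_gbps} ports at {port_gbps} Gbps")
```

Running this prints 128 ports at 800 Gbps and 64 ports at 1600 Gbps, matching the first two configurations listed.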
Such a high radix reduces network complexity for AI deployments. Cisco says a cluster connecting 128,000 GPUs that previously required approximately 2,500 switches can be built with only about 750 G300-based switches, lowering cost and the number of potential failure points. Aggregating SerDes lanes yields 1.6 Tbps OSFP ports, matching top-tier competitors; Cisco instead differentiates through its congestion mitigation and software-defined features.
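To see how radix drives switch count, consider a simplified non-blocking two-tier leaf-spine fabric with radix-R switches (half the ports facing down, half up). This is a rough sketch, not Cisco's topology; their 2,500-versus-750 comparison depends on the baseline design and tiering assumed. The sketch still shows the trend: halving the radix roughly doubles the switch count.

```python
import math

# Simplified two-tier leaf-spine sizing with radix-R switches:
# each leaf has R/2 ports down (to GPUs) and R/2 up (to spines).
def two_tier_switch_count(gpus: int, radix: int) -> int:
    leaves = math.ceil(gpus / (radix // 2))
    spines = math.ceil(leaves * (radix // 2) / radix)
    return leaves + spines

print(two_tier_switch_count(128_000, 128))  # high-radix switch -> 3000
print(two_tier_switch_count(128_000, 64))   # half the radix   -> 6000
```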
Performance Benchmarks: Tackling AI Network Congestion
Cisco's "collective networking engine" pairs a fully shared packet buffer with path-based load balancing to absorb the traffic bursts common in distributed AI training. Unlike the packet spraying used by Broadcom and Nvidia, which distributes packets across multiple paths without regard to downstream congestion, the G300 monitors flow-level congestion across the entire fabric and dynamically reroutes traffic based on real-time telemetry from all connected switches.
According to Cisco, this approach achieves:
- 33% higher link utilization compared to packet-spraying
- 28% reduction in training job completion times
- Lower tail latency during all-to-all communication phases
While vendor benchmarks require independent validation, Cisco's focus on adaptive load balancing addresses a critical pain point in GPU clusters, where congestion can cascade across thousands of nodes.
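The contrast between the two approaches can be sketched in a few lines. This is a toy illustration, not Cisco's implementation: spraying picks paths without looking at load, while telemetry-driven balancing steers traffic to the least-congested path.

```python
import random

def spray(path_loads: list[float]) -> int:
    """Packet spraying: pick any path, ignoring congestion."""
    return random.randrange(len(path_loads))

def adaptive(path_loads: list[float]) -> int:
    """Telemetry-driven: steer the flow to the least-loaded path."""
    return min(range(len(path_loads)), key=lambda i: path_loads[i])

loads = [0.9, 0.4, 0.7, 0.2]  # hypothetical per-path utilization
print(adaptive(loads))         # -> 3, the path at 20% utilization
```

In a real fabric the utilization vector would come from switch telemetry and be updated continuously, which is where the engineering difficulty lies.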
Programmability and Power Efficiency
The G300 supports P4 programmability, allowing operators to redefine forwarding behavior via software updates. This extends hardware lifespan: new features, such as Ultra Ethernet Consortium protocols, can be added without replacing silicon. AMD's Pensando NICs use similar P4 flexibility for late-binding feature deployment.
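The P4 model centers on software-populated match-action tables. The toy Python analogy below (not actual P4 code, and not Cisco's API) shows the idea: forwarding is a table mapping header matches to actions, and a software update can add entries for a protocol the silicon never anticipated.

```python
from typing import Callable

# Toy analogy of the P4 match-action model: forwarding behavior is a
# table of (match field, value) -> action entries that software controls.
ForwardTable = dict[tuple[str, int], Callable[[dict], dict]]

def set_egress(port: int) -> Callable[[dict], dict]:
    return lambda pkt: {**pkt, "egress_port": port}

table: ForwardTable = {
    ("ipv4_dst", 0x0A000001): set_egress(7),
}

# Later, software adds handling for a hypothetical new protocol header
# without any hardware change:
table[("uec_hdr", 0x1)] = set_egress(12)

pkt = {"ipv4_dst": 0x0A000001}
action = table[("ipv4_dst", pkt["ipv4_dst"])]
print(action(pkt))  # packet now carries egress_port 7
```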
For power efficiency, Cisco introduced 800G Linear Pluggable Optics (LPO). These eliminate the onboard DSP and retimer, relying instead on the G300's own signal processing. Cisco says that, combined with the switch hardware, LPO reduces system power by roughly 30% versus traditional DSP-based optics. Though Cisco didn't disclose exact transceiver wattage, each LPO likely consumes 5-10 W, a major saving in dense deployments. The company also offers 1.6T pluggables for maximum throughput.
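A back-of-envelope calculation shows why per-module wattage matters at this density. The DSP-optic figure below is an assumption for illustration (not a Cisco number); the LPO figure uses the midpoint of the 5-10 W range quoted above.

```python
# Back-of-envelope optics power for a fully populated 64-port system.
ports = 64          # e.g. a 64x OSFP chassis
dsp_optic_w = 15    # assumed conventional DSP-based module (illustrative)
lpo_optic_w = 8     # midpoint of the 5-10 W LPO range cited in the article

print(f"DSP optics: {ports * dsp_optic_w} W")  # 960 W
print(f"LPO optics: {ports * lpo_optic_w} W")  # 512 W
```

Across thousands of switches in a 100,000-GPU-class fabric, savings on this order add up to megawatt-scale reductions.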
Notably, Cisco hasn't committed to copackaged optics (CPO), unlike Nvidia and Broadcom. CPO integrates lasers directly into the switch package for further power reductions but adds complexity. Cisco Fellow Rakesh Chopra stated the technology is under evaluation for "business alignment."
Build Recommendations and Availability
The G300 will ship in Cisco's Nexus 9000 series switches and 8000 series routers starting in late 2026. These systems support 64x 1.6T OSFP ports, ideal for spine layers in AI fabrics. For long-distance interconnects between data centers, Cisco also expanded availability of its 51.2T Silicon One P200 routing chip, capable of linking clusters up to 1,000 km apart.
Implementation considerations:
- Deploy G300-based switches as aggregation nodes for GPU pods exceeding 8,000 accelerators
- Use LPO optics for leaf-spine links where power savings outweigh cost premiums
- Leverage P4 programmability to future-proof against evolving AI communication protocols
Cisco's entry intensifies competition in merchant switch silicon. With AI networks consuming megawatts, the G300's combination of throughput, adaptive congestion control, and power-optimized optics offers a compelling alternative for operators scaling beyond 100,000 GPUs.