Celestica's DS6000 switch family delivers unprecedented 1.6 Tbps per port connectivity, designed to meet the insatiable bandwidth demands of modern AI and high-performance computing workloads.
Networking hardware vendor Celestica has raised the bar for high-speed connectivity with its latest DS6000 family of switches, which packs 64 ports, each capable of 1.6 Tbps of throughput. The launch comes at a critical time, as AI and machine learning workloads continue to push the boundaries of network infrastructure.
The DS6000 switches are engineered to address the exponential growth in data processing requirements, particularly in AI training and inference environments. Available in two form factors—a traditional 19-inch 3U air-cooled chassis and an OCP-compliant 21-inch design utilizing both air and liquid cooling—the switches offer flexibility for different deployment scenarios.
At the core of this technological marvel lies Broadcom's Tomahawk 6 ASIC, which delivers a staggering 102.4 Tbps of switching capacity. This chip represents Broadcom's first implementation of 200 Gbps serializer-deserializers (SerDes), the fundamental technology enabling the 1.6 Tbps port speeds. Each of Celestica's OSFP224 ports aggregates eight 200 Gbps physical links into a single logical connection, providing the massive throughput required for modern AI clusters.
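The aggregation arithmetic behind those headline numbers is easy to sanity-check. A minimal sketch, using only the lane speeds and port counts quoted above:

```python
# Sanity-check the DS6000 bandwidth figures quoted in the article.
SERDES_GBPS = 200       # Tomahawk 6 SerDes lane speed (200 Gbps per lane)
LANES_PER_PORT = 8      # physical links aggregated per OSFP224 port
PORTS = 64              # front-panel ports per switch

port_gbps = SERDES_GBPS * LANES_PER_PORT    # 8 x 200 Gbps = 1600 Gbps
switch_tbps = port_gbps * PORTS / 1000      # 64 ports x 1.6 Tbps

print(f"Per-port speed: {port_gbps / 1000} Tbps")   # 1.6 Tbps
print(f"Aggregate capacity: {switch_tbps} Tbps")    # 102.4 Tbps
```

The 64-port total lands exactly on the Tomahawk 6's 102.4 Tbps switching capacity, which is why the switch can in principle run all ports at line rate without blocking.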

"The DS6000 family represents a significant leap in networking technology," said networking analyst Jennifer Chen. "With 64 ports of 1.6 Tbps each, these switches can theoretically deliver over 100 Tbps of non-blocking throughput, which is essential for keeping up with the bandwidth demands of next-generation AI accelerators."
The timing of Celestica's announcement is particularly noteworthy, as it coincides with Nvidia's upcoming ConnectX-9 network interface cards (NICs), which also promise 1.6 Tbps of connectivity. However, industry sources suggest that Nvidia may split this bandwidth across two 800 Gbps links rather than offering a single high-speed port, prioritizing redundancy and path diversity.
AMD is taking a similar approach with its first rack-scale AI compute platform, pairing each MI455X GPU with three 800 Gbps Pensando Vulcano NICs. This multi-port strategy appears to be the current industry preference for balancing performance with reliability.
"The race for faster networking speeds is directly tied to the evolution of AI accelerators," explained Mark Thompson, CTO of a leading AI infrastructure provider. "As GPUs and specialized AI chips become more powerful, the networking fabric that connects them must scale accordingly to prevent bottlenecks that could limit overall system performance."
Looking ahead, the industry is already preparing for even greater speeds. Earlier this year, Broadcom unveiled an optical digital signal processor capable of 400 Gbps per lane, paving the way for future 3.2 Tbps optical transceivers. However, practical implementation faces challenges, including host-interface limits: PCIe 6.0 delivers roughly 800 Gbps of usable throughput on a standard x16 slot, only half of what a single 1.6 Tbps port can carry.
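That host-interface bottleneck can be sketched from PCIe 6.0's published 64 GT/s per-lane signaling rate (a rough, raw-bandwidth comparison; usable throughput is lower once encoding and protocol overhead are subtracted, which is why delivered NIC throughput sits nearer 800 Gbps):

```python
# Compare raw PCIe 6.0 x16 bandwidth against one 1.6 Tbps network port.
PCIE6_GT_PER_LANE = 64   # PCIe 6.0 signaling rate, GT/s per lane
LANES = 16               # standard x16 slot
PORT_GBPS = 1600         # one 1.6 Tbps switch/NIC port

raw_gbps = PCIE6_GT_PER_LANE * LANES     # 1024 Gbps raw, per direction
shortfall = PORT_GBPS - raw_gbps         # bandwidth the slot cannot supply

print(f"x16 raw bandwidth: {raw_gbps} Gbps")          # 1024 Gbps
print(f"Shortfall vs 1.6T port: {shortfall} Gbps")    # 576 Gbps
```

Even before protocol overhead, a single x16 slot cannot feed a 1.6 Tbps port, which helps explain why Nvidia and AMD are splitting bandwidth across multiple 800 Gbps links rather than exposing one monolithic port.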
For enterprises and cloud providers evaluating this new technology, the DS6000 switches represent both an opportunity and a consideration. The massive port density and throughput could significantly reduce the complexity of large-scale AI deployments, but the power requirements and cooling demands of such high-performance systems must be carefully managed.
Celestica began taking orders for the DS6000 family this week, with shipments expected to begin in the coming months. As AI workloads continue to evolve, the networking infrastructure that supports them will undoubtedly continue to advance, pushing the boundaries of what's possible in high-performance computing.
For more technical details on the DS6000 family, you can visit Celestica's official product page. For insights into Broadcom's Tomahawk 6 ASIC, the Broadcom networking solutions page provides additional information.
