The ASRock Rack 4UXGM-GNR2 CX8 4U server implements NVIDIA's MGX architecture with ConnectX-8 PCIe switching, enabling 400Gbps network bandwidth per GPU alongside 16 E1.S SSDs and specialized cooling for AI workloads.

The ASRock Rack 4UXGM-GNR2 CX8 represents a significant evolution in GPU server design through its implementation of NVIDIA's MGX architecture with ConnectX-8 PCIe switching. This 4U platform measures 800mm (31.5") deep and pairs each of its eight GPUs with a dedicated 400Gbps network path, rethinking GPU-to-network connectivity for AI and HPC workloads.
Physical Architecture Overview
The front panel features a 16-drive E1.S SSD array, consolidating storage that would traditionally occupy ~2U of faceplate space into a compact form factor. E1.S drives provide high-density storage while preserving airflow for critical components.
Beneath the storage array, five hot-swappable fan modules occupy the lower 2U of the front panel, providing dedicated cooling for the eight NVIDIA RTX Pro 6000 Blackwell GPUs housed internally. 
Networking Revolution via ConnectX-8
The rear I/O reveals the architectural breakthrough: four NVIDIA ConnectX-8 NICs integrated via a PCIe switch board. Each NIC provides QSFP112 400Gbps ports (per NVIDIA's ConnectX-8 specifications), delivering:
- 400Gbps dedicated bandwidth per GPU
- Compatibility with NVIDIA Spectrum-4 switches such as the SN5610
- Support for breakout configurations using NVIDIA 800G DAC cables (e.g., one 800G switch port split into two 400G server links)
This implementation eliminates traditional PCIe slot constraints, giving each GPU a direct 400Gbps (50GB/s) network path that approaches the ~63GB/s of usable PCIe Gen5 x16 bandwidth.
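To put that comparison in numbers, here is a quick back-of-the-envelope sketch in Python; the encoding-overhead figures are standard PCIe values rather than anything from the product page:

```python
# Back-of-the-envelope comparison of the per-GPU 400GbE path with a
# PCIe Gen5 x16 host link. Standard PCIe figures; not vendor data.

NIC_GBPS = 400                    # one QSFP112 400G port per GPU
nic_gb_per_s = NIC_GBPS / 8       # 50 GB/s on the wire

GEN5_GTS_PER_LANE = 32            # PCIe Gen5 raw rate per lane
LANES = 16
ENCODING = 128 / 130              # Gen5 uses 128b/130b encoding
pcie_gb_per_s = GEN5_GTS_PER_LANE * LANES * ENCODING / 8   # ~63 GB/s

print(f"Network per GPU: {nic_gb_per_s:.0f} GB/s")
print(f"PCIe Gen5 x16:   {pcie_gb_per_s:.1f} GB/s usable")
print(f"Network/PCIe:    {nic_gb_per_s / pcie_gb_per_s:.0%}")
```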
Management and Control Plane
A dedicated NVIDIA BlueField-3 DPU handles north-south traffic with:
- 400Gbps aggregate throughput
- Security and provisioning capabilities (see NVIDIA's BlueField-3 documentation)
- Separate 1GbE ports via an Intel i350 controller for OS management
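Adding up the in-band and management-plane links gives a sense of the platform's total network budget. A simple tally, where the two-ports-per-NIC figure is inferred from the four-NIC, eight-GPU layout rather than stated outright:

```python
# Tally of the high-speed network budget described above. The two-port-
# per-NIC figure is an inference from four NICs serving eight GPUs at
# 400Gbps each; verify against the product specifications.

GPU_COUNT = 8
NIC_COUNT = 4
PORTS_PER_NIC = 2                 # assumed: 2x QSFP112 400G per ConnectX-8
PORT_GBPS = 400
DPU_GBPS = 400                    # BlueField-3 aggregate (north-south)

east_west = NIC_COUNT * PORTS_PER_NIC * PORT_GBPS   # 3200 Gbps
per_gpu = east_west // GPU_COUNT                    # 400 Gbps per GPU

print(f"East-west (GPUs):  {east_west} Gbps ({per_gpu} Gbps per GPU)")
print(f"North-south (DPU): {DPU_GBPS} Gbps")
```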
Power and Thermal Design
The system employs four 3.2kW 80Plus Titanium power supplies, necessitating 208-240V power infrastructure. The five hot-swap fan modules implement a zone-based cooling strategy optimized for GPU thermal profiles.
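A rough sizing exercise shows why high-line power is required; note that the N+1 redundancy policy below is our assumption, not a published figure:

```python
# Rough PSU and branch-circuit sizing. The N+1 redundancy assumption is
# ours, not from the spec sheet; confirm the supported PSU policies.

PSU_WATTS = 3200
PSU_COUNT = 4
VOLTS = 208                       # typical high-line distribution

combined_w = PSU_WATTS * PSU_COUNT            # 12.8 kW combined
usable_n1_w = PSU_WATTS * (PSU_COUNT - 1)     # 9.6 kW with one PSU redundant
amps_per_psu = PSU_WATTS / VOLTS              # ~15.4 A per feed at full load

print(f"Combined capacity: {combined_w / 1000:.1f} kW")
print(f"Usable with N+1:   {usable_n1_w / 1000:.1f} kW")
print(f"Per-PSU current:   {amps_per_psu:.1f} A at {VOLTS} V")
```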
Deployment Considerations
- Network Topology: Requires NVIDIA Quantum-2 InfiniBand or Spectrum-4 Ethernet fabrics to utilize the full bandwidth; see the port-budget sketch after this list
- Rack Planning: 31.5" depth requires careful rack selection
- Power Infrastructure: ~12kW peak draw necessitates 3-phase power distribution
- Storage Configuration: E1.S drives enable high IOPS/low latency access but require specialized carriers
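For the network topology point above, breakout arithmetic drives switch sizing. A hypothetical sketch, assuming an SN5610-class switch with 64 OSFP 800G ports, each split into two 400G links via the 800G DAC breakouts mentioned earlier (verify both figures against NVIDIA's switch documentation):

```python
# Hypothetical leaf-switch port budget. The 64x800G OSFP figure for an
# SN5610-class Spectrum-4 switch is an assumption to verify against
# NVIDIA's documentation, as is the 2x400G breakout per port.

SWITCH_800G_PORTS = 64
LINKS_400G = SWITCH_800G_PORTS * 2     # 128 x 400G links after breakout
GPU_LINKS_PER_SERVER = 8               # one 400G port per GPU

# Naive budget: every switch port faces a server (no spine uplinks).
max_servers = LINKS_400G // GPU_LINKS_PER_SERVER                 # 16

# Non-blocking leaf: half the ports reserved for spine uplinks (1:1).
nonblocking_servers = (LINKS_400G // 2) // GPU_LINKS_PER_SERVER  # 8

print(f"400G links per switch: {LINKS_400G}")
print(f"Servers, no uplinks:   {max_servers}")
print(f"Servers, 1:1 leaf:     {nonblocking_servers}")
```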
This architecture demonstrates how PCIe switching enables new GPU-to-network ratios, with implications for AI training cluster design where interconnect bandwidth often limits scaling efficiency. The elimination of traditional network card slots in favor of integrated ConnectX-8 interfaces represents a fundamental shift in GPU server design philosophy.
For detailed specifications, consult the ASRock Rack 4UXGM-GNR2 CX8 product page.
