Linux Drops HIPPI: End of an Era for Supercomputing Networking
#Hardware

Hardware Reporter
2 min read

The Linux kernel removes 30-year-old HIPPI networking support, eliminating 3,000 lines of obsolete code that powered early supercomputer clusters.

The Linux kernel is officially retiring HIPPI (High-Performance Parallel Interface) support in the upcoming Linux 7.0 release, marking the end of a networking standard that once connected supercomputers in the late 1980s and early 1990s. The removal eliminates nearly 3,000 lines of unmaintained code, a significant cleanup for the networking subsystem.

HIPPI was revolutionary for its time as the first near-gigabit networking standard, delivering 800 Mb/s speeds over copper cables up to 25 meters. Designed specifically for high-performance computing environments, it connected Cray supercomputers and scientific workstations when consumer Ethernet topped out at 10 Mb/s. Its channel-based architecture used 32-bit parallel data paths and required bulky, specialized cabling that consumed significantly more power per bit than modern standards.

Performance Comparison: Then vs. Now

Specification       HIPPI (1990s)    Modern 10GbE    Modern 100GbE
Bandwidth           800 Mb/s         10 Gb/s         100 Gb/s
Max Distance        25 m (copper)    100 m (CAT6A)   100 m (fiber)
Power Efficiency    ~15 W per port   ~3 W per port   ~5 W per port
Latency             ~100 μs          <1 μs           <0.3 μs
Protocol Overhead   30%              <5%             <3%
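The gaps in the table can be sanity-checked with a few lines of arithmetic. This is an illustrative sketch using the table's approximate figures, not measured benchmark data:

```python
# Figures taken from the comparison table above (approximate, per-port specs).
hippi = {"gbps": 0.8, "watts": 15, "overhead": 0.30}
ge100 = {"gbps": 100, "watts": 5, "overhead": 0.03}

speedup = ge100["gbps"] / hippi["gbps"]           # raw bandwidth ratio
w_per_gb_hippi = hippi["watts"] / hippi["gbps"]   # watts per Gb/s of capacity
w_per_gb_100 = ge100["watts"] / ge100["gbps"]

# Usable throughput after protocol overhead
goodput_hippi = hippi["gbps"] * (1 - hippi["overhead"])
goodput_100 = ge100["gbps"] * (1 - ge100["overhead"])

print(f"bandwidth ratio: {speedup:.0f}x")                     # 125x
print(f"W per Gb/s: {w_per_gb_hippi:.2f} vs {w_per_gb_100:.2f}")
print(f"goodput: {goodput_hippi:.2f} Gb/s vs {goodput_100:.1f} Gb/s")
```

Normalized per gigabit of capacity, the difference is even starker than the per-port numbers suggest: 18.75 W per Gb/s for HIPPI against 0.05 W per Gb/s for 100GbE.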

HIPPI's technical limitations became apparent as alternatives emerged:

  • Fibre Channel surpassed it in bandwidth (1 Gb/s) and distance (10 km) by the mid-1990s
  • Gigabit Ethernet (1999) offered backward compatibility with existing infrastructure
  • Modern RDMA over Converged Ethernet (RoCE) now delivers microsecond latency at 400 Gb/s

The removal commit notes that the HIPPI code hadn't received meaningful maintenance in over two decades. While the kernel retains the if_hippi.h header for TUN compatibility, all driver code and core logic are being excised.

For modern high-performance builds:

  1. Homelab clusters: Use 10GbE SFP+ NICs ($30-$80 used) with DAC cables
  2. AI/Compute Nodes: Deploy 100GbE InfiniBand or RoCE solutions
  3. Low-Power Systems: Leverage 2.5GbE chipsets consuming under 2W at idle
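To put those link classes in perspective, here is a rough, idealized estimate of how long moving a 1 TiB dataset takes at wire speed (a sketch that ignores protocol overhead, disk bottlenecks, and TCP dynamics):

```python
def transfer_seconds(size_bytes: float, link_gbps: float) -> float:
    """Idealized wire-speed transfer time: bits to move / link rate."""
    return size_bytes * 8 / (link_gbps * 1e9)

tib = 1024 ** 4  # one tebibyte in bytes
for name, gbps in [("2.5GbE", 2.5), ("10GbE", 10), ("100GbE", 100)]:
    print(f"{name:>7}: {transfer_seconds(tib, gbps) / 60:.1f} min per TiB")
```

Even the budget 2.5GbE tier moves a terabyte-scale dataset in about an hour; HIPPI's 800 Mb/s would need roughly three hours for the same job under the same idealized assumptions.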

Spec-for-spec, contemporary links outperform HIPPI by orders of magnitude: a single 100GbE port moves data 125x faster while drawing roughly a third of the per-port power, and the gap per gigabit of capacity is larger still. This retirement reflects Linux's evolution from supporting legacy supercomputing hardware to optimizing for cloud-native and AI workloads, where density and efficiency dominate.
