SK hynix and SanDisk unveil High Bandwidth Flash standard for AI inference servers

Chips Reporter

SK hynix and SanDisk have jointly announced High Bandwidth Flash (HBF), a new storage standard positioned between HBM DRAM and traditional SSDs, targeting inference AI servers with speeds potentially exceeding 10 GB/s per chip.

The AI Storage Bottleneck

As AI workloads continue to push the boundaries of data center capabilities, traditional NAND flash storage is proving insufficient for the demands of modern inference servers. Contemporary server-grade SSDs can reach speeds of 28 GB/s per unit, but this performance ceiling is becoming a critical bottleneck in AI infrastructure.

The announcement comes at a time when data center operators are grappling with multiple challenges. Photonics and high-speed data movement represent the next major AI bottleneck, while massive AI data center buildouts are already straining energy supplies across regions. The introduction of HBF appears to be a direct response to these converging pressures.

Technical Specifications and Performance Targets

The official press release provides limited technical details, but the positioning of HBF as a layer between HBM DRAM and flash SSDs offers important clues about its intended performance envelope. With current-generation HBM3E delivering approximately 1.2 TB/s per stack, HBF chips would need to achieve speeds of at least 10 GB/s each to justify their existence as an intermediate storage layer.

This performance target would enable combined speeds in the hundreds of GB/s range when deployed across multiple chips, significantly exceeding the capabilities of traditional SSDs while remaining more cost-effective than DRAM for large-scale deployments.
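The positioning argument above can be checked with simple arithmetic. The per-device figures below come from the article; the 16-die stack height is a hypothetical assumption for illustration only, since the press release gives no stack configuration.

```python
# Back-of-envelope bandwidth comparison of the three storage tiers discussed
# in the article. Per-device numbers are from the text; the HBF stack height
# is an assumption, not part of the announced standard.

HBM3E_PER_STACK_GBS = 1200   # ~1.2 TB/s per current-generation HBM3E stack
SSD_PER_UNIT_GBS = 28        # top-end server SSD, per unit
HBF_PER_CHIP_GBS = 10        # minimum plausible speed per HBF chip

DIES_PER_HBF_STACK = 16      # assumed stack height (hypothetical)

# Aggregate bandwidth of one assumed HBF stack.
hbf_stack_gbs = HBF_PER_CHIP_GBS * DIES_PER_HBF_STACK

print(f"HBM3E stack:  {HBM3E_PER_STACK_GBS} GB/s")
print(f"HBF stack (assumed {DIES_PER_HBF_STACK} dies): {hbf_stack_gbs} GB/s")
print(f"Top-end SSD:  {SSD_PER_UNIT_GBS} GB/s")
```

Under these assumptions a single HBF stack would land at 160 GB/s, comfortably between one SSD and one HBM3E stack, which is consistent with the "intermediate layer" framing.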

Power Efficiency Considerations

Power efficiency emerges as a central design consideration for the HBF standard. The announcement specifically mentions this concern, which is particularly relevant given current data center power consumption trends. A high-end Micron 9650 SSD, for example, consumes 25 W at full load, and when scaled to exabyte-level deployments involving tens of thousands of drives, power consumption becomes a critical operational constraint.
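The scaling concern can be made concrete with a rough estimate. The 25 W figure is from the article; the per-drive capacity is an assumed value for illustration, not a quoted specification.

```python
# Rough power estimate for an exabyte-scale flash deployment, using the
# 25 W-per-drive figure cited above. Drive capacity is an assumption.

DRIVE_POWER_W = 25.0        # high-end server SSD at full load (from the article)
DRIVE_CAPACITY_TB = 30.72   # assumed per-drive capacity (hypothetical)
TARGET_EB = 1.0             # one exabyte of raw capacity

drives_needed = (TARGET_EB * 1_000_000) / DRIVE_CAPACITY_TB  # 1 EB = 1e6 TB
total_power_mw = drives_needed * DRIVE_POWER_W / 1_000_000   # watts -> megawatts

print(f"Drives for {TARGET_EB} EB: {drives_needed:,.0f}")
print(f"Storage power at full load: {total_power_mw:.2f} MW")
```

Even under these generous assumptions, a single exabyte of such drives draws on the order of a megawatt at full load, before controllers, networking, or cooling are counted.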

System Integration and Architecture

The vague description of HBF as a "supporting layer" suggests multiple possible implementation approaches. One possibility is that HBF could function similarly to an on-SSD cache but at a much larger scale, providing a high-speed buffer between DRAM and traditional storage. Alternatively, HBF might operate as a high-speed block storage device comparable to Intel's Optane technology, requiring application and operating system modifications to utilize efficiently.
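The cache-like reading of the "supporting layer" idea can be sketched as a tiered read path: check DRAM first, fall through to the middle tier, and only then touch the SSD, promoting data upward on each hit. Everything here, including the class name, the promotion policy, and the API, is a hypothetical illustration, not anything defined by the HBF standard.

```python
# Minimal sketch of a DRAM -> HBF -> SSD read path (all names hypothetical).
class TieredStore:
    def __init__(self):
        self.dram = {}   # fastest, smallest tier
        self.hbf = {}    # hypothetical HBF-style middle tier
        self.ssd = {}    # slowest, largest tier

    def write(self, key, value):
        self.ssd[key] = value                # data ultimately lives on SSD

    def read(self, key):
        if key in self.dram:                 # hot: already in DRAM
            return self.dram[key], "dram"
        if key in self.hbf:                  # warm: promote to DRAM
            self.dram[key] = self.hbf[key]
            return self.dram[key], "hbf"
        value = self.ssd[key]                # cold: stage into middle tier
        self.hbf[key] = value
        return value, "ssd"

store = TieredStore()
store.write("weights_shard_0", b"...")
_, tier1 = store.read("weights_shard_0")   # cold read, served from SSD
_, tier2 = store.read("weights_shard_0")   # now staged in the middle tier
_, tier3 = store.read("weights_shard_0")   # promoted to DRAM
print(tier1, tier2, tier3)                 # ssd hbf dram
```

The block-device alternative mentioned above would look quite different: rather than a transparent cache, applications and the operating system would address HBF directly, which is why that path would require software changes.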

Timeline and Industry Adoption

While no specific release date has been announced, the companies indicate that "demand of complex memory solutions, including HBF, will pick up around 2030." This timeline suggests that HBF is still in the early development and standardization phases, with production deployments likely several years away.

The standard will be managed under the Open Compute Project, indicating a commitment to industry-wide adoption rather than proprietary implementation. This approach could accelerate ecosystem development and ensure broader compatibility across different hardware platforms.

Market Focus: Inference Servers

The decision to target inference servers specifically reflects the evolving AI landscape. As AI models become more sophisticated and more widely deployed, the volume of inference outputs requiring storage continues to grow, and traditional storage solutions are struggling to keep pace with both the speed and the volume those workloads demand.

Industry Context and Implications

The HBF announcement represents a significant evolution in storage architecture, acknowledging that the traditional NAND SSD paradigm is reaching its limits for certain high-performance applications. By creating a new standard that bridges the gap between DRAM and flash storage, SK hynix and SanDisk are positioning themselves at the forefront of the next generation of AI infrastructure.

This development also highlights the increasing specialization of data center hardware, with different components optimized for specific workloads rather than general-purpose performance. As AI continues to drive data center evolution, we can expect to see more such specialized solutions emerging to address the unique challenges of machine learning and inference workloads.

[Image: Microsoft data center in Mount Pleasant, Wisconsin]

The success of HBF will depend on several factors, including its actual performance characteristics, power efficiency, cost-effectiveness, and the willingness of system manufacturers to adopt the new standard. However, given the pressing needs of the AI industry and the involvement of major players like SK hynix and SanDisk, HBF represents a significant step toward addressing the storage challenges that are currently limiting AI infrastructure scalability.
