AMD Denies MI455X Delays as Nvidia's VR200 Platform Rumors Point to Earlier Release
#Chips


Chips Reporter

AMD refutes claims of MI455X production delays, maintaining that its Helios systems remain on track for H2 2026, while Nvidia's Vera Rubin platform may arrive three to six months early, according to industry analysts.

AMD has pushed back against a report suggesting delays in the production and adoption of its next-generation Instinct MI455X AI accelerators, while Nvidia's Vera Rubin platform for AI data centers may arrive earlier than anticipated.

According to a report by SemiAnalysis, AMD's first rack-scale MI455X UALoE72 system would see engineering samples and low-volume production in H2 2026, but the mass-production ramp would not begin, and the first production tokens would not be generated, until Q2 2027 due to manufacturing delays. The report claimed that the Helios systems, which pack 72 Instinct MI455X AI accelerators with 31 TB of HBM4 memory, would deliver 2.9 exaFLOPS of FP4 compute for AI inference and 1.4 exaFLOPS of FP8 compute for AI training.
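For a rough sense of what those rack-level figures imply per accelerator, the back-of-envelope sketch below simply divides the reported totals by 72. This assumes the capacity and throughput split evenly across the rack and uses decimal unit conversions; actual per-GPU specifications may differ from this crude division.

```python
# Back-of-envelope per-accelerator figures derived from the rack-level
# numbers cited in the SemiAnalysis report: 72 accelerators, 31 TB of HBM4,
# 2.9 FP4 exaFLOPS, 1.4 FP8 exaFLOPS. Assumes an even split across the rack.

ACCELERATORS_PER_RACK = 72
HBM4_TOTAL_TB = 31        # rack-level HBM4 capacity, as reported
FP4_TOTAL_EFLOPS = 2.9    # rack-level FP4 throughput (inference)
FP8_TOTAL_EFLOPS = 1.4    # rack-level FP8 throughput (training)

hbm_per_gpu_gb = HBM4_TOTAL_TB * 1000 / ACCELERATORS_PER_RACK
fp4_per_gpu_pflops = FP4_TOTAL_EFLOPS * 1000 / ACCELERATORS_PER_RACK
fp8_per_gpu_pflops = FP8_TOTAL_EFLOPS * 1000 / ACCELERATORS_PER_RACK

print(f"HBM4 per accelerator: ~{hbm_per_gpu_gb:.0f} GB")      # ~431 GB
print(f"FP4 per accelerator:  ~{fp4_per_gpu_pflops:.1f} PFLOPS")  # ~40.3 PFLOPS
print(f"FP8 per accelerator:  ~{fp8_per_gpu_pflops:.1f} PFLOPS")  # ~19.4 PFLOPS
```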

Anush Elangovan, corporate vice president of software development at AMD, was quick to refute these claims on X (formerly Twitter), stating: "Well, your assessment is still wrong. On target for 2H 2026."

AMD AI servers

The Helios rack-scale solution represents AMD's ambitious entry into the high-performance AI server market. Initially, these systems were expected to use native UALink interconnects for scale-up connectivity to maximize performance. However, it now appears that at least the initial Helios machines will use UALink over Ethernet (UALoE), which would deliver lower performance than native UALink connectivity.

This potential performance limitation may be connected to broader UALink ecosystem challenges. Astera Labs, a leading developer of connectivity solutions, recently confirmed that UALink-based platforms would ramp in 2027 rather than 2026. Jitendra Mohan, CEO of Astera Labs, stated during a conference call: "Recent public roadmap announcements from AWS and AMD along with other ongoing engagements indicate a broad adoption. UALink remains the highest performance and lowest latency fully open solution for AI scale up connectivity, and we will be ready to intercept the initial customer platform ramps in 2027."

Meanwhile, Nvidia appears to be gaining momentum in the AI accelerator race. According to Evercore ISI analyst Mark Lipacis, Nvidia's NVL72 VR200 platform may be released as early as Q2 2026, three to six months ahead of schedule. This acceleration is reportedly enabled by leveraging suppliers that traditionally served the China market for worldwide product development.

An Evercore note to clients stated: "Some believe that China ban has enabled Nvidia to leverage suppliers that have typically served China to work on worldwide product development, enabling Rubin to be 3 – 6 months ahead of schedule. Some would not be surprised if Rubin shipments happen by end of Q2 2026. Hyperscalers note that Vera CPU, Rubin GPU [are] already in fabrication and running test/validation."

Jensen Huang, Nvidia's CEO, had previously announced that the Vera Rubin platform was in production as of early January, suggesting that some of Nvidia's closest customers could receive the new AI platform earlier than expected.

If these timelines hold true, Nvidia could strengthen its leadership position in the AI market for the coming year, as developers of frontier AI models continue to rely heavily on Nvidia's hardware ecosystem. The potential delay in AMD's mass production ramp, combined with an accelerated Nvidia release schedule, could significantly impact the competitive landscape in AI data center infrastructure.

The contrasting trajectories of these two AI platforms highlight the intense competition in the AI accelerator market, where even months of difference in product availability can translate to significant market share advantages. As both companies race to deliver the next generation of AI computing power, the industry will be watching closely to see which platform ultimately delivers on its promises and captures the imagination of AI developers and data center operators.
