Cloud Hypervisor 52 Adds AMD SEV‑SNP Confidential VM Support on KVM
#Regulation

Chips Reporter

Version 52 of the open‑source Cloud Hypervisor now launches AMD SEV‑SNP encrypted VMs via Linux KVM, bringing measured boot, secure nested paging and new I/O features to cloud workloads and expanding the hypervisor’s appeal beyond Intel‑centric environments.

Announcement

The Cloud Hypervisor team released v52 on 15 May 2026, adding native support for AMD SEV‑SNP confidential virtual machines when running on the Linux KVM hypervisor. The update also patches a critical use‑after‑free bug in the virtio‑block async path, introduces VFIO passthrough via iommufd/vfio‑cdev, adds multi‑connection TCP live migration, and brings an async QCOW2 backend powered by io_uring. All changes are available in the project’s GitHub repository and documented on its site.

Technical specifications

| Feature | Detail | Impact |
| --- | --- | --- |
| SEV‑SNP VM launch | Cloud Hypervisor now drives KVM’s SEV‑SNP launch sequence (the KVM_SEV_SNP_LAUNCH_START, _UPDATE, and _FINISH commands) to provision encrypted guest memory, enable Secure Nested Paging, and record a measured boot hash. | Enables AMD‑based confidential computing on the same VMM that previously supported only Intel TDX and Microsoft’s MSHV. |
| Measured boot | Guest firmware and kernel hashes are stored in the SNP attestation report, letting cloud operators verify the exact software stack before runtime. | Provides compliance‑ready evidence for regulated workloads (finance, health, government). |
| VFIO passthrough via iommufd | Device IOMMU groups are exposed through the new iommufd file descriptor, and Cloud Hypervisor opens a vfio‑cdev node for each device. | Reduces the number of required file descriptors and improves hot‑plug latency for GPUs, NICs, and storage controllers. |
| Async QCOW2 with io_uring | The block backend now submits read/write requests to an io_uring submission queue instead of using the traditional thread‑per‑queue model. | Benchmarks on an EPYC 9655 show a 23 % reduction in average I/O latency (1.84 µs to 1.42 µs) and a 15 % throughput gain on 4 KB random workloads. |
| Multi‑connection TCP live migration | Migration streams can be split across up to four TCP sockets, each carrying a portion of the memory bitmap and device state. | Improves migration speed by 1.8× on 100 Gbps fabrics versus single‑socket migration. |
| Core scheduling option | A new --cpu-scheduling flag lets operators pin vCPU threads to dedicated host cores or to a shared pool with CFS‑based weighting. | Allows fine‑grained performance isolation for multi‑tenant clouds. |
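
The multi‑connection migration idea can be sketched in ordinary Rust: the snapshot is cut into one contiguous chunk per socket, each chunk is framed with its index and length, and the destination reassembles the pieces by offset. The chunking scheme, framing, and function names below are illustrative assumptions for this article, not Cloud Hypervisor’s actual migration protocol.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Number of parallel sockets, mirroring v52's "up to four" migration streams.
const NUM_CONNECTIONS: usize = 4;

/// Sender side: split the snapshot into contiguous chunks and push each one
/// over its own TCP connection. Frame layout: u64 index, u64 length, payload.
fn migrate(memory: &[u8], dest: &str) -> std::io::Result<()> {
    let chunk_len = memory.len().div_ceil(NUM_CONNECTIONS);
    let mut handles = Vec::new();
    for (idx, chunk) in memory.chunks(chunk_len).enumerate() {
        let chunk = chunk.to_vec();
        let dest = dest.to_string();
        handles.push(thread::spawn(move || -> std::io::Result<()> {
            let mut sock = TcpStream::connect(dest)?;
            sock.write_all(&(idx as u64).to_le_bytes())?;
            sock.write_all(&(chunk.len() as u64).to_le_bytes())?;
            sock.write_all(&chunk)?;
            Ok(())
        }));
    }
    for h in handles {
        h.join().unwrap()?;
    }
    Ok(())
}

/// Receiver side: accept one connection per chunk and write each payload at
/// the offset implied by its index, regardless of arrival order.
fn receive(listener: &TcpListener, total_len: usize) -> std::io::Result<Vec<u8>> {
    let chunk_len = total_len.div_ceil(NUM_CONNECTIONS);
    let mut memory = vec![0u8; total_len];
    for _ in 0..NUM_CONNECTIONS {
        let (mut sock, _) = listener.accept()?;
        let mut hdr = [0u8; 8];
        sock.read_exact(&mut hdr)?;
        let idx = u64::from_le_bytes(hdr) as usize;
        sock.read_exact(&mut hdr)?;
        let len = u64::from_le_bytes(hdr) as usize;
        let start = idx * chunk_len;
        sock.read_exact(&mut memory[start..start + len])?;
    }
    Ok(memory)
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?.to_string();
    // Stand-in for a guest-memory snapshot.
    let snapshot: Vec<u8> = (0..1_000_000u32).map(|i| (i % 251) as u8).collect();
    let total = snapshot.len();
    let sender = thread::spawn(move || migrate(&snapshot, &addr));
    let received = receive(&listener, total)?;
    sender.join().unwrap()?;
    assert_eq!(received.len(), total);
    println!("migrated {} bytes over {} sockets", total, NUM_CONNECTIONS);
    Ok(())
}
```

The per‑socket framing is what lets the receiver place chunks out of order; a real implementation would additionally stream dirty‑page bitmaps iteratively rather than a single static buffer.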

The SEV‑SNP support is limited to AMD EPYC 9004 “Genoa” and 9005 “Turin” silicon that exposes the SEV‑SNP CPUID leaf. The hypervisor checks for the KVM_CAP_SEV_SNP capability at start‑up and aborts if the host kernel (≥ 6.8) does not expose the required ioctls. For reference, the measured‑boot flow mirrors the one described in AMD’s SEV‑SNP Architecture Specification (v1.03), with the additional step of inserting the guest’s launch measurement into the VMCB before VMRUN.
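
A minimal version of that start‑up probe might look like the Rust sketch below, which opens /dev/kvm and issues KVM_CHECK_EXTENSION. The ioctl number follows the kernel’s _IO(KVMIO, nr) encoding; the KVM_CAP_SEV_SNP value here is an explicit placeholder, not the real constant — the actual id lives in the host kernel’s linux/kvm.h.

```rust
use std::fs::OpenOptions;
use std::os::fd::AsRawFd;
use std::os::raw::{c_int, c_ulong};

// Bind to libc's ioctl directly so the sketch needs no external crates.
extern "C" {
    fn ioctl(fd: c_int, request: c_ulong, ...) -> c_int;
}

const KVMIO: c_ulong = 0xAE;

// _IO(type, nr) carries no payload, so the encoding is just (type << 8) | nr.
const fn kvm_io(nr: c_ulong) -> c_ulong {
    (KVMIO << 8) | nr
}

const KVM_CHECK_EXTENSION: c_ulong = kvm_io(0x03);
// PLACEHOLDER value: look up the real KVM_CAP_SEV_SNP id in linux/kvm.h
// on a >= 6.8 host kernel before using this probe for real.
const KVM_CAP_SEV_SNP: c_int = 0;

/// Returns Ok(true) when KVM reports the probed capability as present.
/// KVM_CHECK_EXTENSION returns a positive value for supported capabilities.
fn snp_supported() -> std::io::Result<bool> {
    let kvm = OpenOptions::new().read(true).write(true).open("/dev/kvm")?;
    let ret = unsafe { ioctl(kvm.as_raw_fd(), KVM_CHECK_EXTENSION, KVM_CAP_SEV_SNP) };
    Ok(ret > 0)
}

fn main() {
    match snp_supported() {
        Ok(true) => println!("host kernel reports the probed capability; launch can proceed"),
        Ok(false) => eprintln!("aborting: kernel does not expose the probed capability"),
        Err(e) => eprintln!("aborting: cannot open /dev/kvm: {e}"),
    }
}
```

Aborting before any guest state is built, as described above, keeps the failure mode clean: no encrypted memory is provisioned against a kernel that cannot honor the SNP ioctls.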

Market implications

  1. Broader confidential‑computing adoption – By exposing SEV‑SNP through a lightweight Rust VMM, Cloud Hypervisor lowers the barrier for hyperscalers and niche cloud providers to offer encrypted VMs without licensing Intel TDX stacks. Early adopters such as Microsoft Azure and Alibaba Cloud have already signaled interest in a non‑Intel confidential‑compute path, which could diversify the market share of AMD’s EPYC line.
  2. Competitive pressure on proprietary hypervisors – VMware’s ESXi and Red Hat’s OpenShift Virtualization have begun integrating SEV‑SNP, but they rely on larger code bases and longer release cycles. Cloud Hypervisor’s six‑week cadence (v52 follows v51 by 42 days) demonstrates that open‑source projects can iterate faster, potentially shifting workload‑placement decisions toward platforms that can ship security features quickly.
  3. Supply‑chain resilience – The addition of AMD‑only confidential VMs reduces dependence on Intel’s hardware roadmap. For customers building multi‑cloud strategies, the ability to run the same VMM on both Intel TDX and AMD SEV‑SNP nodes simplifies orchestration layers and mitigates risk from single‑vendor shortages.
  4. Performance‑driven pricing – The io_uring‑based QCOW2 backend and multi‑socket live migration directly address cost concerns for high‑IOPS workloads. Cloud providers can now quote lower per‑GB‑month rates for encrypted storage while maintaining latency targets, a factor that could influence price competition in the IaaS segment.
  5. Ecosystem growth – The open‑source nature of Cloud Hypervisor encourages contributions from hardware vendors, OS distributors, and tooling companies. Projects such as Cyberus Tech’s Confidential‑Linux and Ant’s secure container runtime are already testing integrations, suggesting a growing ecosystem around Rust‑based VMMs that could eventually rival the traditional C‑centric hypervisor stack.

Overall, Cloud Hypervisor 52 marks a significant step toward a more heterogeneous confidential‑computing market, where AMD’s SEV‑SNP can be deployed with the same agility as Intel’s TDX. The combination of security, performance enhancements, and rapid release cadence positions the project as a compelling choice for cloud operators looking to diversify hardware vendors while keeping operational overhead low.
