At the 2025 OpenZFS Developer Summit, ZettaLane presented objbacker.io, a native VDEV implementation that bypasses FUSE to run ZFS directly on object storage, achieving 3.7 GB/s read throughput from S3, GCS, and Azure Blob Storage.

The economics of cloud storage have always created a difficult trade-off for organizations that need file storage. At roughly $96,000 per year for 100TB on AWS EBS gp3 and $360,000 per year for 100TB on AWS EFS, traditional cloud block and file storage quickly becomes prohibitively expensive at scale. This cost structure has pushed many teams to explore object storage as a backend for NAS solutions, but performance has been the sticking point until now.
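The arithmetic behind those figures is easy to reproduce. The short sketch below uses approximate published per-GB-month list prices (assumed US rates; actual pricing varies by region, tier, and usage) and adds S3 Standard for comparison, since object storage pricing is what the later cost argument rests on:

```python
# Rough storage cost comparison; per-GB-month rates are approximate US list
# prices and will vary by region, tier, and usage.
PRICE_PER_GB_MONTH = {
    "EBS gp3": 0.08,        # block storage
    "EFS Standard": 0.30,   # managed NFS file storage
    "S3 Standard": 0.023,   # object storage
}

capacity_gb = 100 * 1000  # 100TB

for service, rate in PRICE_PER_GB_MONTH.items():
    annual = rate * capacity_gb * 12
    print(f"{service}: ~${annual:,.0f}/year for 100TB")

# EBS gp3: ~$96,000/year for 100TB
# EFS Standard: ~$360,000/year for 100TB
# S3 Standard: ~$27,600/year for 100TB
```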
At the OpenZFS Developer Summit 2025 in Portland, Oregon, ZettaLane presented their solution to this challenge: MayaNAS with objbacker.io, a native ZFS VDEV implementation that achieves 3.7 GB/s sequential read throughput directly from S3, GCS, and Azure Blob Storage without FUSE overhead.
The FUSE Problem with ZFS on Object Storage
Most approaches to running ZFS on object storage rely on FUSE-based filesystems like s3fs or goofys. These tools mount an object storage bucket as a local filesystem, and ZFS then runs on top of that mount, typically via file-backed vdevs. While functional, this architecture introduces significant overhead:
Traditional FUSE approach:
- ZFS → VFS → FUSE kernel module → s3fs/goofys daemon (userspace) → S3 API
Every I/O operation crosses the kernel-userspace boundary on its way to the FUSE daemon and again on the way back, and the resulting context-switch overhead caps throughput. Additionally, FUSE implementations often perform poorly with random I/O patterns and struggle with the eventual consistency models of some object storage systems.
objbacker.io: Native VDEV Integration
ZettaLane's approach with objbacker.io fundamentally changes this architecture. Instead of treating object storage as a filesystem layer, they implemented a native ZFS VDEV type called VDEV_OBJBACKER that communicates directly with a userspace daemon via a character device at /dev/zfs_objbacker.
objbacker.io architecture:
- ZFS → /dev/zfs_objbacker → objbacker.io daemon → Native cloud SDK → Object storage
This direct path eliminates the FUSE overhead entirely. The daemon uses native cloud SDKs (AWS SDK, Google Cloud SDK, Azure SDK) for direct object storage access, handling ZIO (ZFS I/O) operations with minimal latency.
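The talk did not publish the wire protocol between the VDEV and the daemon, but the daemon's overall shape is a loop that pulls I/O requests off the character device and services them through a cloud SDK. A minimal Python sketch of that shape, with an entirely hypothetical request header layout, looks roughly like this:

```python
import os
import struct

DEVICE = "/dev/zfs_objbacker"  # character device exposed by the objbacker VDEV

# Hypothetical request header (op code, byte offset, length); the real ABI was
# not described in the talk and will certainly differ.
HEADER = struct.Struct("<Bqq")
OP_READ, OP_WRITE, OP_TRIM, OP_SYNC = range(4)

def serve(handle_request):
    """Pull I/O requests off the character device and dispatch them."""
    fd = os.open(DEVICE, os.O_RDWR)
    try:
        while True:
            raw = os.read(fd, HEADER.size)
            if not raw:
                break
            op, offset, length = HEADER.unpack(raw)
            payload = os.read(fd, length) if op == OP_WRITE else b""
            reply = handle_request(op, offset, length, payload)
            os.write(fd, reply)  # hand the completion (or data) back to ZFS
    finally:
        os.close(fd)
```

The interesting part is what handle_request does with each operation, which is where the ZIO mapping below comes in.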
How ZIO Operations Map to Object Storage
The implementation translates ZFS I/O operations directly to object storage primitives:
- ZIO_TYPE_WRITE → PUT object
- ZIO_TYPE_READ → GET object
- ZIO_TYPE_TRIM → DELETE object (for TRIM/unmap operations)
- ZIO_TYPE_IOCTL (sync) → USYNC (flush pending writes)
This direct mapping works efficiently because of a critical design decision: aligning the ZFS recordsize with object storage characteristics. With recordsize set to 1MB, each ZFS block maps to exactly one object, so a full-record write goes out as a single PUT with no intermediate caching or read-modify-write, and these large, aligned requests are precisely the pattern object storage handles best.
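Put concretely, the daemon-side dispatch reduces to a switch over the ZIO type. The sketch below uses boto3; the function and the string op names are illustrative stand-ins, not ZettaLane's implementation:

```python
import boto3

s3 = boto3.client("s3")

def handle_zio(bucket, key, op, data=b""):
    """Translate one ZIO-style operation into a single S3 request."""
    if op == "WRITE":      # ZIO_TYPE_WRITE -> PUT object
        s3.put_object(Bucket=bucket, Key=key, Body=data)
    elif op == "READ":     # ZIO_TYPE_READ -> GET object
        return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    elif op == "TRIM":     # ZIO_TYPE_TRIM -> DELETE object
        s3.delete_object(Bucket=bucket, Key=key)
    elif op == "SYNC":     # flush pending writes; nothing is buffered in this sketch
        pass
```

Because each 1MB record is a whole object, none of these paths needs a read-modify-write cycle.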
Object Naming and Data Layout
objbacker.io uses an S3backer-compatible layout for object naming. When you write a 5MB file with 1MB recordsize, it creates five objects at offsets 0, 1MB, 2MB, 3MB, and 4MB, named sequentially as bucket/00001, bucket/00002, etc. This predictable pattern allows ZFS to efficiently manage data placement and recovery.
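As a quick check, enumerating the keys for that 5MB example (zero-padded, 1-based names as in the example above; the real layout's prefix and padding may differ):

```python
RECORDSIZE = 1 << 20  # 1MB: one ZFS block per object

def keys_for_extent(offset, length, recordsize=RECORDSIZE):
    """List the object keys covering a byte range."""
    first = offset // recordsize
    last = (offset + length - 1) // recordsize
    return [f"{block + 1:05d}" for block in range(first, last + 1)]

print(keys_for_extent(0, 5 * RECORDSIZE))
# ['00001', '00002', '00003', '00004', '00005']
```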

Performance Validation
The benchmark results presented at the summit were compelling. Using AWS c5n.9xlarge instances (36 vCPUs, 96 GB RAM, 50 Gbps network) with a 6-bucket striped ZFS pool:
- Sequential Read: 3.7 GB/s from S3
- Sequential Write: 2.5 GB/s to S3
The FIO test configuration that achieved these results used:
- ZFS recordsize: 1MB
- Block size: 1MB
- 10 concurrent FIO jobs
- 10GB per job file size
- sync I/O engine (POSIX synchronous I/O)
The key to saturating network bandwidth was parallel bucket I/O. With six S3 buckets configured as a striped pool, ZFS parallelizes reads and writes across multiple object storage endpoints simultaneously.
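The effect of striping across buckets can be approximated outside ZFS with plain concurrent GETs. In this hedged sketch the bucket names and keys are placeholders; it simply keeps many 1MB reads in flight across six buckets at once, which is the same mechanism the striped pool relies on:

```python
from concurrent.futures import ThreadPoolExecutor

import boto3

# Placeholder names; a six-wide striped pool would point at six real buckets.
BUCKETS = [f"mayanas-stripe-{i}" for i in range(6)]
s3 = boto3.client("s3")

def fetch(bucket, key):
    """GET one 1MB object and return the number of bytes transferred."""
    return len(s3.get_object(Bucket=bucket, Key=key)["Body"].read())

def parallel_read(keys, workers=32):
    """Spread GETs round-robin across the stripe buckets, many in flight at once."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [
            pool.submit(fetch, BUCKETS[i % len(BUCKETS)], key)
            for i, key in enumerate(keys)
        ]
        return sum(f.result() for f in futures)

# total_bytes = parallel_read([f"{block:05d}" for block in range(1, 10241)])  # ~10GB
```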
Hybrid Architecture with ZFS Special Devices
MayaNAS doesn't rely on object storage exclusively; it leverages ZFS's special device architecture to create a two-tier storage system:
- Metadata and small blocks (<128KB) → Local NVMe SSD
- Large blocks (1MB+) → Object storage backend
This hybrid approach recognizes that metadata operations require low latency and high IOPS, while large sequential data needs throughput rather than IOPS. The result is a single filesystem with two tiers, balancing performance and cost.
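In ZFS terms this is the special allocation class with a small-block cutoff. The placement decision itself is simple; the function below is an illustrative restatement of that policy using the 128KB threshold from the presentation, not MayaNAS code:

```python
SPECIAL_SMALL_BLOCKS = 128 * 1024  # cutoff cited in the presentation

def placement(block_size, is_metadata=False):
    """Pick the tier a block lands on, mirroring the two-tier layout above."""
    if is_metadata or block_size < SPECIAL_SMALL_BLOCKS:
        return "local NVMe special vdev"
    return "objbacker.io object-storage vdev"

assert placement(4096, is_metadata=True) == "local NVMe special vdev"
assert placement(64 * 1024) == "local NVMe special vdev"
assert placement(1 << 20) == "objbacker.io object-storage vdev"
```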
MayaScale: Complementary Block Storage
While MayaNAS addresses file storage, ZettaLane also presented MayaScale, their NVMe-oF block storage solution for workloads requiring sub-millisecond latency. MayaScale uses local NVMe SSDs with Active-Active HA clustering and provides multiple performance tiers:
| Tier | IOPS (Read/Write) | Latency |
|---|---|---|
| Ultra | 585K / 1.1M | 280 µs |
| High | 290K / 1.02M | 268 µs |
| Medium | 175K / 650K | 211 µs |
| Standard | 110K / 340K | 244 µs |
| Basic | 60K / 120K | 218 µs |
Together, MayaNAS and MayaScale cover the spectrum from high-throughput file storage to low-latency block storage.
Multi-Cloud Consistency
Both solutions deploy consistently across AWS, Azure, and GCP using:
- AWS: CloudFormation templates
- Azure: ARM templates and Marketplace integration
- GCP: Terraform modules and Marketplace
The underlying ZFS configuration and management interface remain identical; only the cloud-specific networking and storage APIs differ.
The Broader Context: Object Storage as Backend
This work enters an ongoing debate about using object storage as a filesystem backend. Critics point to eventual consistency, latency, and the mismatch between object storage APIs and filesystem semantics. Proponents highlight the cost savings and scalability.
ZettaLane's argument is that native VDEV integration addresses many of these concerns. By working at the ZFS level rather than through a FUSE layer, they can:
- Implement proper error handling and retry logic
- Take advantage of ZFS's existing data integrity features
- Maintain atomicity for metadata operations on local NVMe
- Achieve performance that approaches local NVMe for sequential workloads
The 3.7 GB/s throughput demonstrates that with proper alignment and native integration, object storage can handle substantial I/O loads. The question becomes whether this performance holds up across different object storage providers' consistency models and whether the hybrid architecture provides sufficient cache hit rates for metadata-heavy workloads.
Getting Started
ZettaLane provides deployment templates for all three major clouds: CloudFormation for AWS, ARM templates and Marketplace listings for Azure, and Terraform modules plus Marketplace integration for GCP, covering both MayaNAS and MayaScale.
The complete 50-minute presentation will be available on the OpenZFS YouTube channel once published by the summit organizers.
What This Means for Cloud Storage Economics
If the performance claims hold in production environments, MayaNAS with objbacker.io could reduce storage costs by 70%+ compared to traditional cloud block storage for appropriate workloads. The hybrid architecture means organizations can maintain high performance for metadata and small files while scaling bulk data storage cost-effectively.
The native VDEV approach also suggests a path forward for other cloud-native ZFS implementations. Rather than treating cloud storage as an external filesystem, integrating directly with ZFS's VDEV layer could unlock new performance characteristics across different storage backends.
The real test will be how this performs with production workloads that have different I/O patterns than the sequential benchmarks presented. Random read/write performance, metadata-heavy operations, and recovery times after failures will determine whether this becomes a mainstream solution or remains specialized for archival and backup scenarios.
For teams currently paying premium prices for cloud block storage, the cost savings alone warrant investigation. The ability to maintain ZFS features like snapshots, compression, and checksumming on object storage makes this particularly attractive for data protection and compliance use cases.
