Tansu.io: A Stateless Kafka Alternative That Scales to Zero
#Infrastructure

DevOps Reporter
5 min read

Peter Morgan unveiled Tansu at QCon London 2026, a radically different approach to Kafka that replaces stateful brokers with stateless ones backed by durable external storage, enabling 20MB memory footprints and instant scaling.

At QCon London 2026, Peter Morgan introduced Tansu, an open-source, Apache Kafka-compatible messaging broker that challenges the fundamental assumptions behind Kafka's architecture. After two years of solo development, Morgan presented a system that keeps Kafka's protocol but discards everything else—no replication, no leader elections, no permanent broker state.

The Core Premise: Storage is Already Durable

Morgan's central insight is simple but profound: Kafka replicates data between brokers because it assumes individual disks and machines will fail. Tansu instead assumes storage is already durable and resilient, then builds everything from that premise. This seemingly small shift has massive architectural consequences.

Traditional Kafka brokers are what Morgan calls "pets." They have identities, require extensive configuration, run 24/7 with 4GB heaps, and scaling them down is so rare that when he asked the audience who actually does it, only one hand went up. Tansu brokers are "cattle." They carry no state, have no leaders, run in about 20MB of resident memory, and can scale to zero and back up in roughly 10 milliseconds.

Live Demo: Scaling to Zero on Fly.io

In a compelling live demonstration, Morgan deployed Tansu to Fly.io as a 40MB statically linked binary in a from-scratch container image—no operating system, just the binary and some SSL certificates. He configured it to scale to zero using Fly's proxy, created a topic with standard Kafka CLI tools, produced a message, killed the broker, and then consumed the message. The broker woke up automatically when the consumer connected. The entire deployment ran on a 256MB machine.
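Scale-to-zero on Fly.io is driven by configuration rather than code. As a rough sketch, a `fly.toml` for a TCP service that Fly's proxy stops when idle and restarts on the next connection might look like the following; the app name, port, and this exact layout are illustrative assumptions, not the configuration shown in the talk:

```toml
# Hypothetical fly.toml sketch for a scale-to-zero TCP service.
app = "tansu-demo"           # illustrative app name

[[services]]
  internal_port = 9092       # Kafka protocol port
  protocol = "tcp"
  auto_stop_machines = true  # Fly proxy stops the machine when idle
  auto_start_machines = true # ...and restarts it on the next connection
  min_machines_running = 0   # allow scale to zero

  [[services.ports]]
    port = 9092
```

Because Tansu brokers hold no state, a machine stopped this way loses nothing; the next consumer connection simply boots a fresh broker against the same durable storage.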

Pluggable Storage Architecture

Where Tansu gets truly interesting is its storage architecture. Rather than a single built-in storage engine, it offers pluggable backends selected via a URL parameter:

  • S3 (or compatible stores like Tigris and R2): For diskless operation
  • SQLite: For development environments where you want to copy a single file to reset state between test runs
  • Postgres: For teams that want their streaming data to land directly in a database
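The URL-based selection is easy to picture: the scheme of the storage URL determines which backend gets constructed. A minimal Python sketch of that dispatch (Tansu itself is Rust, and these class names are invented for illustration):

```python
from urllib.parse import urlparse

# Invented stand-ins for the real storage backends.
class S3Storage:
    def __init__(self, url): self.kind = "s3"

class SqliteStorage:
    def __init__(self, url): self.kind = "sqlite"

class PostgresStorage:
    def __init__(self, url): self.kind = "postgres"

BACKENDS = {"s3": S3Storage, "sqlite": SqliteStorage, "postgres": PostgresStorage}

def storage_from_url(url: str):
    """Pick a storage backend from the URL scheme, as Tansu's
    storage-engine configuration does conceptually."""
    scheme = urlparse(url).scheme
    try:
        return BACKENDS[scheme](url)
    except KeyError:
        raise ValueError(f"unsupported storage scheme: {scheme!r}")

print(storage_from_url("postgres://localhost/tansu").kind)  # postgres
```

The broker code above the storage layer stays identical whichever backend the URL names.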

Morgan was candid about his favorite: Postgres. The original motivation for the project, he explained, was watching data flow through Kafka topics only to end up in a database anyway, and wondering why the intermediate step was necessary.

Postgres Integration: Beyond Simple Storage

The Postgres integration goes beyond using it as a store. Morgan showed how Tansu originally wrote records with sequential INSERT statements, which became a bottleneck because each statement requires a round-trip response. He replaced this with Postgres's COPY FROM protocol, which streams rows into the database without waiting for individual acknowledgements: a single COPY setup, a stream of CopyData messages, and one CopyDone at the end. The result is substantially higher throughput for batch ingestion.
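The throughput difference comes down to round trips. A toy Python model (a simulation for counting purposes, not the actual Postgres wire protocol) makes the contrast concrete:

```python
class FakeConnection:
    """Counts client-server round trips; stands in for a Postgres link."""
    def __init__(self):
        self.round_trips = 0

    def insert(self, row):
        # Each INSERT waits for its own response: one round trip per row.
        self.round_trips += 1

    def copy_from(self, rows):
        # COPY streams data without per-row replies:
        # one round trip to start, one at the end.
        self.round_trips += 1          # COPY ... FROM STDIN setup
        for _ in rows:
            pass                       # CopyData messages, no reply awaited
        self.round_trips += 1          # CopyDone + final command response

rows = [("key", f"value-{i}") for i in range(10_000)]

seq = FakeConnection()
for row in rows:
    seq.insert(row)

bulk = FakeConnection()
bulk.copy_from(rows)

print(seq.round_trips)   # 10000
print(bulk.round_trips)  # 2
```

For 10,000 records, the sequential path pays 10,000 waits on the network; the COPY path pays two, regardless of batch size.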

And because a produce in Tansu is just an INSERT (or COPY) and a fetch is just a SELECT, the transactional outbox pattern simply disappears: you can atomically update business data and queue a message in the same database transaction using a stored procedure that Tansu provides.
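Because the "topic" lives in the same database as the business tables, one transaction covers both writes. A sketch using SQLite as a stand-in for Postgres (the idea of a Tansu-provided stored procedure is from the talk; this exact schema and the `topic_records` table name are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    -- Stand-in for the table a Tansu topic writes to.
    CREATE TABLE topic_records (topic TEXT, key TEXT, value TEXT);
    INSERT INTO orders (id, status) VALUES (1, 'pending');
""")

# One atomic transaction: update business data AND "produce" a message.
# No separate outbox table or relay process is needed.
with conn:
    conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")
    conn.execute(
        "INSERT INTO topic_records (topic, key, value) VALUES (?, ?, ?)",
        ("order-events", "1", '{"status": "shipped"}'),
    )

print(conn.execute("SELECT status FROM orders WHERE id = 1").fetchone()[0])
```

If either statement fails, both roll back together, which is exactly the guarantee the outbox pattern exists to approximate across two systems.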

Schema Validation: Broker-Side Enforcement

Schema validation is another area where Tansu diverges from Kafka. In standard Kafka, schema enforcement relies on a separate registry and is optional at the client. In Tansu, if a topic has a schema—Avro, JSON, or Protobuf—the broker validates every record before writing it. Invalid data gets rejected at the broker, not the client.

Morgan described this as a deliberate trade-off: it's slower than Kafka's pass-through approach because the broker must decompress and validate each record, but it guarantees data consistency regardless of which client produces.
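Conceptually, the broker's produce path gains a validation gate before the write. A small Python sketch of that gate, with a hand-rolled type check standing in for real Avro, JSON Schema, or Protobuf validation:

```python
import json

# A toy "schema": required field names and their expected JSON types.
TOPIC_SCHEMAS = {
    "orders": {"id": int, "status": str},
}

def produce(topic: str, record: bytes, log: list) -> bool:
    """Validate a record against the topic's schema before appending.
    Returns True on write, False on rejection (a real broker would
    send the client an error response instead)."""
    schema = TOPIC_SCHEMAS.get(topic)
    if schema is not None:
        try:
            value = json.loads(record)
        except ValueError:
            return False
        for field, ftype in schema.items():
            if not isinstance(value.get(field), ftype):
                return False          # rejected at the broker
    log.append(record)                # only valid records reach storage
    return True

log = []
print(produce("orders", b'{"id": 1, "status": "shipped"}', log))  # True
print(produce("orders", b'{"id": "oops"}', log))                  # False
print(len(log))                                                   # 1
```

The cost Morgan described is visible here: every record must be parsed before it can be checked, work that a pass-through Kafka broker never does.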

That broker-side schema awareness also enables something Tansu does that Kafka cannot: writing validated data directly into open table formats. If a topic has a Protobuf schema, Tansu can automatically write records to Apache Iceberg, Delta Lake, or Parquet, creating tables, updating schemas on change, and handling metadata.

Morgan commented: "It actually works for AVRO, JSON, and Protobuf. Protobuf is the 'best' because it has a built-in mechanism for backwards-compatible schema changes (and the one I used in the demo), but they can all be written as Parquet/Iceberg/Delta."

A "sink topic" configuration skips the normal storage entirely and writes exclusively to the open table format, turning Tansu into a direct pipeline from Kafka-compatible producers to analytics-ready data.

Performance as a Proxy

Tansu can also sit in front of an existing Kafka cluster as a proxy, relaying requests at 60,000 records per second with sub-millisecond P99 latency on modest hardware: 13 megabytes of RAM on a Mac Mini.

Current Limitations and Future Plans

Morgan was upfront about the gaps. SSL support is present but being reworked. There are no throttling controls or access control lists yet. Compaction and message deletion aren't implemented on S3. Share groups are not planned.

The project is written in asynchronous Rust, Apache-licensed, and actively looking for contributors. All examples, including the Fly.io deployment demo, are available on GitHub.

Why This Matters

Tansu represents a fundamental rethinking of how we approach event streaming. By leveraging modern cloud storage's durability and embracing stateless architecture, it offers a compelling alternative for teams who want Kafka's ecosystem without Kafka's operational complexity. The ability to scale to zero, the direct Postgres integration, and the schema-validated open table format writes suggest a future where event streaming becomes even more tightly integrated with data warehousing and analytics workflows.

The provocative question Morgan poses—what if we kept Kafka's protocol but threw out everything else?—might just be the right question for teams looking to simplify their streaming infrastructure in an era of durable, scalable cloud storage.
