This article explains Kafka's safe producer defaults, what they mean, and how version compatibility between brokers and clients affects them. It covers the settings that ensure idempotent message delivery, proper ordering, and safe retries, with special focus on changes introduced in Kafka version 3.0.
In the previous article, Kafka Retries and Idempotent Producers Explained, we discussed how idempotent producers prevent duplicate messages even in the presence of retries. Here we build on that foundation: what do the safe producer defaults actually guarantee, and what happens when broker and client versions differ?
What Does "Safe Producer" Mean in Kafka?
A safe producer ensures that messages are:
- Written without duplicates (idempotence)
- Preserved in correct order per partition
- Retried safely if transient failures occur
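To build intuition for the "no duplicates" guarantee, here is a toy model of broker-side idempotence: the broker tracks the last sequence number it accepted per producer id and silently discards retried batches it has already written. This is a deliberately simplified sketch, not Kafka's actual implementation (real brokers track sequences per partition per producer epoch, in batches):

```python
# Toy model of broker-side idempotent deduplication (NOT Kafka's real code):
# the partition remembers the last accepted sequence per producer id and
# drops any batch whose sequence it has already seen.
class ToyPartition:
    def __init__(self):
        self.log = []
        self.last_seq = {}  # producer_id -> last accepted sequence number

    def append(self, producer_id: str, seq: int, msg: str) -> bool:
        if self.last_seq.get(producer_id, -1) >= seq:
            return False  # duplicate retry: acknowledged again, but not re-appended
        self.log.append(msg)
        self.last_seq[producer_id] = seq
        return True

p = ToyPartition()
p.append("pid-1", 0, "order-created")
p.append("pid-1", 0, "order-created")  # network retry of the same batch
p.append("pid-1", 1, "order-paid")
print(p.log)  # ['order-created', 'order-paid'] - the retry was deduplicated
```

The key point: deduplication happens on the broker, which is why the producer can retry aggressively without risking duplicates.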
Kafka achieves this with the following producer settings:
- enable.idempotence=true
- acks=all
- retries=Integer.MAX_VALUE
- max.in.flight.requests.per.connection=5
- delivery.timeout.ms=120000
Note: min.insync.replicas is a broker/topic-level setting and must be configured for full durability.
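Taken together, these settings form a single configuration map. The sketch below expresses them as a plain Python dictionary using Kafka's documented property names; the commented-out last line shows where you would hand it to an actual client (e.g. confluent-kafka's Producer), which is omitted so the snippet stays self-contained:

```python
# Safe producer configuration, using Kafka's standard property names.
safe_producer_config = {
    "enable.idempotence": True,                   # no duplicates on retry
    "acks": "all",                                # wait for all in-sync replicas
    "retries": 2147483647,                        # Integer.MAX_VALUE: retry until delivery.timeout.ms expires
    "max.in.flight.requests.per.connection": 5,   # the maximum allowed with idempotence
    "delivery.timeout.ms": 120000,                # overall upper bound on send + retries
}

# producer = Producer(safe_producer_config)  # e.g. confluent_kafka.Producer
```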
Kafka Version ≥ 3.0 — What It Covers
When we say "Kafka ≥ 3.0 has safe producer enabled by default," we are talking about the behavior of the producer client: starting with client version 3.0, these settings are enabled out of the box, provided the broker also supports idempotent produce requests.
What is enabled automatically?
- enable.idempotence=true
- acks=all
- retries=Integer.MAX_VALUE
- max.in.flight.requests.per.connection=5
Important: delivery.timeout.ms=120000 is also a default, but it is independent of idempotence and applies to all producers.
What this does NOT cover
- Consumer behavior → consumers can still see duplicates (e.g., on reprocessing after a rebalance) and must deduplicate if exactly-once processing is required
- Other Kafka components → Streams, Connect, etc., have their own delivery-guarantee configuration
- Broker settings → durability still depends on replication.factor and min.insync.replicas
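To see how broker-side settings complete the picture: with acks=all, a write succeeds only while at least min.insync.replicas replicas are in sync, so the number of broker failures a topic can tolerate while remaining writable is a simple subtraction. A minimal sketch (the function name is ours, not a Kafka API):

```python
def tolerated_failures(replication_factor: int, min_insync_replicas: int) -> int:
    """Broker failures a topic can tolerate while acks=all writes still succeed.

    Once the in-sync replica count drops below min.insync.replicas, acks=all
    writes are rejected, so up to replication_factor - min_insync_replicas
    brokers can fail before the topic becomes unwritable.
    """
    return replication_factor - min_insync_replicas

# The common production setup: replication.factor=3, min.insync.replicas=2
print(tolerated_failures(3, 2))  # 1: one broker can fail and writes keep succeeding
```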
Version Compatibility: Broker vs Producer
Kafka broker version and producer client version are separate:
| Component | Role | Version dependency |
|---|---|---|
| Broker | Kafka server/cluster | Defines feature support (≥3.0 enables safe defaults) |
| Producer | Kafka client | Implements safe producer defaults; requires a broker that supports them |
| Consumer | Kafka client | Reads messages; independent of producer defaults |
What Happens with Mixed Versions?
| Broker Version | Producer Version | Safe Producer Defaults Applied? |
|---|---|---|
| ≥ 3.0 | ≥ 3.0 | Automatic |
| ≥ 3.0 | < 3.0 (e.g., 2.1) | Must manually enable enable.idempotence, acks=all, etc. |
| < 3.0 | any | Must manually enable safe producer configs |
Key insight: Even if your broker is ≥3.0, using an older producer client will not automatically enable safe producer defaults.
When Should You Explicitly Configure Safe Producer Settings?
- Legacy systems (Kafka ≤ 2.8) - Always configure enable.idempotence=true, acks=all, etc., manually.
- Mixed-version clusters - Explicit config ensures consistent behavior across old and new clients.
- Critical systems - For payments, order processing, or inventory management, explicit configs prevent duplicates and maintain ordering.
- Upgrades - When migrating brokers or clients, explicit settings help maintain predictable behavior.
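One way to make this explicit in code is to always merge the safe settings into whatever configuration you start from, rather than relying on client-version defaults. A hypothetical helper (the function and constant names are ours, not part of any Kafka client API):

```python
# Hypothetical helper: enforce safe-producer settings regardless of whether
# the client version would apply them by default.
SAFE_DEFAULTS = {
    "enable.idempotence": True,
    "acks": "all",
    "retries": 2147483647,
    "max.in.flight.requests.per.connection": 5,
    "delivery.timeout.ms": 120000,
}

def with_safe_defaults(config: dict) -> dict:
    """Return a copy of config with safe producer settings enforced."""
    merged = dict(config)
    merged.update(SAFE_DEFAULTS)  # safe settings win over caller-supplied values
    return merged

cfg = with_safe_defaults({"bootstrap.servers": "broker:9092", "acks": "1"})
print(cfg["acks"])  # "all": the unsafe override is replaced
```

Centralizing this in one place also keeps behavior consistent across a mixed-version fleet: every service gets the same guarantees no matter which client version it ships with.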
Recommended Safe Producer Configuration
Even with Kafka ≥ 3.0, explicitly setting configs can improve clarity:
- acks=all
- enable.idempotence=true
- retries=Integer.MAX_VALUE
- max.in.flight.requests.per.connection=5
- delivery.timeout.ms=120000
This ensures high reliability, correct ordering, and duplicate-free message delivery.
Bottom Line
- Safe producer = producer client behavior, not broker or consumer
- Broker ≥3.0 supports safe defaults, but older clients must be configured manually
- Explicit configuration is still recommended for critical systems or mixed-version clusters
- Understanding producer vs broker vs consumer roles avoids common pitfalls in Kafka message delivery
Summary
- Kafka safe producer guarantees idempotent writes and correct ordering per partition
- Defaults are automatic in broker ≥3.0 with modern clients
- For older clients or mixed clusters, safe producer configs must be explicitly set
- Proper broker settings (min.insync.replicas) are still required for full durability
- Ensuring safe producer behavior is essential for reliable Kafka pipelines, especially in distributed, event-driven systems.
If you found this useful, leave a comment. I always appreciate feedback and different perspectives.
Originally published on my personal blog: 🔗 https://rajeevranjan.dev/blog/kafka/kafka-safe-producer-defaults-compatibility/
