Postgres vs. Redis: The Surprising Pragmatism of Caching in SQL
In the perennial debate over database specialization versus consolidation, one argument surfaces repeatedly: "Use Postgres for everything." But how does this philosophy hold up for high-velocity workloads like caching? To find out, an engineer conducted a rigorous benchmark pitting Redis against PostgreSQL using unlogged tables—with surprising conclusions about pragmatic architecture.
The Testing Ground
The experiment ran on a Kubernetes cluster with strict resource constraints:
- Database nodes: Limited to 2 vCPUs and 8GB RAM
- HTTP server: Simple Go service with /get and /set endpoints
- Benchmarking: k6 load testing with 30 million pre-seeded keys
Cache interfaces were standardized for both backends:
type Cache interface {
    Get(ctx context.Context, key string) (string, error)
    Set(ctx context.Context, key string, value string) error
}
PostgreSQL used unlogged tables to minimize write overhead:
CREATE UNLOGGED TABLE IF NOT EXISTS cache (
    key VARCHAR(255) PRIMARY KEY,
    value TEXT
);
Performance Showdown
Read Workloads (80% hit rate)
- Throughput: Redis: 7,425 RPS | PostgreSQL: 1,420 RPS
- Latency: Redis p95: 56ms | PostgreSQL p95: >2000ms
- Resource usage: PostgreSQL saturated both vCPUs; Redis used 1.28 cores
Write Workloads
- Throughput: Redis: 1,900 RPS | PostgreSQL: 600 RPS (unlogged)
- Latency: Redis p95: 140ms | PostgreSQL p95: >3000ms
Mixed Workload (80% reads, 20% writes)
- Redis maintained 5,200 RPS with sub-200ms latency
- PostgreSQL struggled at 1,200 RPS with 1500ms+ delays
Unlogged tables proved critical for Postgres write performance: regular logged tables performed roughly 10x worse. Yet even with this optimization, PostgreSQL consumed more memory (6GB vs. Redis's 4.3GB) and could not escape CPU saturation.
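For context on the trade-off: unlogged tables skip write-ahead logging, which is why writes are faster, but their contents are truncated after a crash and are not replicated to standbys. That is usually acceptable for a cache, and PostgreSQL (9.5+) lets you toggle the setting on an existing table:

```sql
ALTER TABLE cache SET UNLOGGED;  -- skip WAL: faster writes, data lost on crash recovery
ALTER TABLE cache SET LOGGED;    -- revert to crash-safe, replicated storage
```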
Why Postgres Anyway? The Pragmatist's Argument
Despite Redis's dominance, the author advocates PostgreSQL for most projects:
"Almost always, my projects need a database. Not having to add another dependency comes with its own benefits... 7425 requests per second is still a lot. That’s more than half a billion requests per day."
Key trade-offs highlight a fundamental architectural choice:
1. Operational Simplicity: Avoid managing separate data stores
2. Adequate Scale: Most applications won't exceed PostgreSQL's caching capacity
3. Interface Flexibility: Abstract cache layer enables future Redis adoption
4. Transactional Consistency: Single database simplifies data integrity
The Hidden Cost of "Just Add Redis"
While Redis delivered roughly 3-5x higher throughput (and an order of magnitude lower latency), the benchmark reveals hidden costs of specialization:
- Infrastructure overhead: Additional monitoring, backups, and failover mechanisms
- Network complexity: Inter-service communication introduces new failure modes
- Developer context-switching: Multiple query languages and client libraries
As the author notes, scaling PostgreSQL vertically or introducing Redis later remains viable. For startups and mid-scale systems, the cognitive load reduction of a unified stack often outweighs suboptimal cache performance.
When Speed Isn't Everything
This experiment confirms Redis as the faster caching engine, yet shows it is unnecessary for most applications. In a world obsessed with micro-optimization, the slower solution can sometimes accelerate delivery. As the data suggests, architecture isn't just about performance; it's also about optimizing for human efficiency.
Source: Benchmark data and methodology from Redis is fast. I’ll cache in Postgres.