A developer proposes solutions for making existing Redis instances accessible to serverless applications via HTTP, sparking discussion around proxy patterns and trade-offs.

A common pain point emerges when migrating to serverless architectures: existing data infrastructure that doesn't fit the new model. Redis, with its TCP-based protocol, presents a particular challenge. Services like Upstash solve this by providing Redis-compatible endpoints over HTTP, but what happens when you already have Redis running on a client's VPS?
The core problem is architectural mismatch. Serverless functions are ephemeral and stateless, and often run behind firewalls with limited outbound connection options. Traditional Redis clients require persistent TCP connections, direct network access, and careful connection pooling. These assumptions break down in serverless environments, where you can't guarantee:
- Long-lived connections (functions are ephemeral)
- Direct network reachability (VPCs, private subnets, NAT layers)
- Client-side connection management (each invocation is fresh)
The Proxy Question
This leads to the fundamental architectural question: how do you bridge these worlds?
Option 1: TCP Tunneling
You could establish persistent tunnels that expose the Redis instance to your serverless functions. Tools like ssh -R or specialized tunneling services create the pathway. The trade-off is operational complexity: you're now managing tunnel infrastructure, dealing with connection stability, and potentially introducing a single point of failure.
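As a concrete example of the tunneling shape, a reverse SSH tunnel run from the VPS can publish its local Redis on a host the functions can reach; the bastion host here is a hypothetical assumption:

```sh
# Run on the VPS that hosts Redis. Forwards port 6380 on the bastion
# to the local Redis at 127.0.0.1:6379. Binding a non-loopback address
# requires "GatewayPorts yes" in the bastion's sshd_config; -N means
# "forward only, run no remote command".
ssh -N -R 0.0.0.0:6380:127.0.0.1:6379 tunnel@bastion.example.com
```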
Option 2: HTTP Wrapper
Build a lightweight HTTP service that wraps Redis commands. Each HTTP request maps to a Redis operation. This is essentially what Upstash does, but self-hosted. The benefits are clear: serverless-friendly, stateless, easy to secure. The costs: you're adding latency (HTTP overhead versus direct Redis), managing another service, and potentially losing some Redis features (pub/sub and streaming operations become awkward).
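To make the shape concrete, here's a minimal sketch of such a wrapper in Python with Flask and redis-py. The routes, the bearer-token scheme, and the environment variable names are illustrative assumptions, not any existing project's API:

```python
# Minimal HTTP-to-Redis wrapper: a sketch, not a production proxy.
# Routes and auth scheme are hypothetical.
import os

import redis
from flask import Flask, abort, request

app = Flask(__name__)
r = redis.Redis(host=os.environ.get("REDIS_HOST", "127.0.0.1"), port=6379)
API_TOKEN = os.environ["API_TOKEN"]  # shared secret for the HTTP layer


@app.before_request
def check_auth():
    # Reject any request that lacks the bearer token.
    if request.headers.get("Authorization") != f"Bearer {API_TOKEN}":
        abort(401)


@app.get("/keys/<key>")
def get_key(key):
    value = r.get(key)
    if value is None:
        abort(404)
    return value  # raw bytes become the response body


@app.post("/keys/<key>")
def set_key(key):
    # Optional ?ex=<seconds> sets a TTL, mirroring SET key value EX n.
    ex = request.args.get("ex", type=int)
    r.set(key, request.get_data(), ex=ex)
    return {"ok": True}
```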
Option 3: Accept the TCP Hassle
Some serverless platforms (AWS Lambda with VPC support, Google Cloud Functions) allow TCP connections from functions. You could configure VPC peering and security groups, and use Redis clients with aggressive connection reuse. The trade-offs: platform lock-in, complex networking setup, and cold-start penalties as connections re-establish.
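If you go this route, the standard mitigation is to create the Redis client at module scope so that warm invocations reuse the same TCP connection. A sketch for the AWS Lambda Python runtime, with environment variable names as assumptions:

```python
# Sketch of connection reuse in AWS Lambda (Python runtime).
# The client lives at module scope, so it survives across warm
# invocations of the same execution environment; only cold starts
# pay the connection cost.
import os

import redis

r = redis.Redis(
    host=os.environ["REDIS_HOST"],    # reachable via the VPC config
    port=6379,
    socket_connect_timeout=2,         # fail fast during cold starts
    socket_keepalive=True,            # keep the idle connection alive
)


def handler(event, context):
    # Each warm invocation reuses the pooled TCP connection above.
    hits = r.incr(f"hits:{event.get('page', 'home')}")
    return {"statusCode": 200, "body": str(hits)}
```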
The Trade-off Space
Each approach has different characteristics:
Latency: Direct TCP is fastest, HTTP adds 10-50ms overhead, tunneling adds variable latency based on network path.
Operational Complexity: HTTP proxy is simplest to run; tunneling requires monitoring and failover; TCP networking demands expertise in cloud VPCs.
Scalability: HTTP proxies scale horizontally easily; TCP connections require careful pooling; tunneling can become a bottleneck.
Cost: HTTP proxy adds compute cost; tunneling adds infrastructure cost; TCP networking adds complexity cost (engineer time).
A Possible Direction
The discussion in the Redion repository seems to explore a hybrid approach. The idea likely involves creating a protocol bridge that maintains Redis compatibility while exposing HTTP endpoints. This could involve:
- A stateless proxy that translates HTTP requests to Redis commands
- Connection pooling on the proxy side (so serverless functions stay stateless)
- WebSocket support for streaming operations (e.g. SUBSCRIBE, blocking reads like BLPOP)
- Authentication layer to secure the HTTP interface
The key insight is that you're essentially trading connection state for operational simplicity. The proxy maintains the TCP connections to Redis; functions just make HTTP calls.
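Seen from the function side, every operation collapses to one stateless HTTP call. A sketch using only the Python standard library, against the hypothetical proxy routes from the wrapper sketch above:

```python
# What a serverless function sees: one stateless HTTP call per
# operation, with no client library or connection pool to manage.
import os
import urllib.request

PROXY = os.environ.get("REDIS_PROXY_URL", "https://redis-proxy.internal")
TOKEN = os.environ["API_TOKEN"]


def get_key(key: str) -> bytes:
    req = urllib.request.Request(
        f"{PROXY}/keys/{key}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read()
```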
Real-World Considerations
If you're building this, several patterns emerge:
Command Mapping: Most Redis operations map cleanly to HTTP. GET becomes GET /key, SET becomes POST with body. But LIST operations, SCAN, and pub/sub need careful design.
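The cursor-based commands are the awkward part. One workable convention, continuing the Flask sketch above (reusing its app and r), is to surface SCAN's cursor as a query parameter and echo it back in the JSON response; the route itself is an assumption:

```python
# Sketch: mapping SCAN's cursor onto an HTTP query parameter.
# GET /scan?cursor=0&match=user:*  ->  {"cursor": 1472, "keys": [...]}
from flask import request


@app.get("/scan")
def scan_keys():
    cursor = request.args.get("cursor", default=0, type=int)
    match = request.args.get("match")
    # redis-py returns (next_cursor, keys); cursor 0 means the scan is done.
    next_cursor, keys = r.scan(cursor=cursor, match=match, count=100)
    return {"cursor": next_cursor, "keys": [k.decode() for k in keys]}
```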
Connection Management: The proxy needs robust connection pooling to Redis. Each serverless function invocation shouldn't trigger a new Redis connection - that's the bottleneck.
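With redis-py, for example, a bounded pool created once at startup caps how many TCP connections the proxy will ever open to Redis, regardless of HTTP request volume:

```python
# A shared, bounded connection pool at proxy startup. All HTTP request
# handlers borrow from this pool instead of dialing Redis themselves.
import redis

pool = redis.ConnectionPool(
    host="127.0.0.1",
    port=6379,
    max_connections=50,     # hard cap; tune against Redis's maxclients
    socket_keepalive=True,
)
r = redis.Redis(connection_pool=pool)
```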
Statelessness: The proxy itself must be stateless to scale horizontally. Session affinity or external session storage might be needed for certain operations.
Performance: HTTP/2 or HTTP/3 helps with connection reuse. Consider batching multiple Redis commands in single HTTP requests.
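Batching maps naturally onto Redis pipelines: the proxy can accept a list of commands in one request and execute them in a single round trip to Redis. A sketch, again extending the hypothetical Flask wrapper (the /pipeline endpoint is an assumption):

```python
# Sketch: POST /pipeline with a JSON array of commands, e.g.
# [["SET", "a", "1"], ["GET", "a"], ["INCR", "counter"]]
# executed as one Redis pipeline (one round trip to Redis).
from flask import request


@app.post("/pipeline")
def run_pipeline():
    commands = request.get_json()
    pipe = r.pipeline(transaction=False)  # plain pipelining, no MULTI/EXEC
    for cmd in commands:
        pipe.execute_command(*cmd)        # queued, not yet sent
    results = pipe.execute()              # one network round trip
    return {
        "results": [x.decode() if isinstance(x, bytes) else x for x in results]
    }
```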
The Broader Pattern
This isn't just about Redis. It's a recurring pattern: how do you adapt stateful, connection-oriented services to serverless, stateless architectures? We see similar challenges with:
- PostgreSQL (solved by connection poolers like PgBouncer, or HTTP layers like PostgREST)
- Message queues (Kafka REST proxies, AWS SQS HTTP API)
- Traditional databases (various HTTP bridges)
The pattern is: add a stateless translation layer that handles connection management and protocol conversion.
Questions for the Community
The GitHub discussion raises important questions:
Which operations are must-haves? Simple key-value is easy; streaming, pub/sub, and transactions are harder.
How important is Redis compatibility? Do you need full protocol support, or just a subset?
Where should authentication live? At the HTTP layer, or pass-through to Redis AUTH?
What's the failure model? If Redis goes down, does the proxy return 503s or queue requests?
Do you need WebSocket support? Server-sent events or WebSockets for streaming operations?
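On that last question: server-sent events are often the lighter-weight answer, since they ride over plain HTTP. A sketch bridging Redis SUBSCRIBE to an SSE stream, continuing the same hypothetical Flask wrapper:

```python
# Sketch: bridging Redis SUBSCRIBE to server-sent events with Flask.
# Each connected client holds one pub/sub connection on the proxy side.
from flask import Response


@app.get("/subscribe/<channel>")
def subscribe(channel):
    def stream():
        pubsub = r.pubsub()
        pubsub.subscribe(channel)
        try:
            for message in pubsub.listen():
                # Skip subscribe confirmations; forward real messages.
                if message["type"] == "message":
                    yield f"data: {message['data'].decode()}\n\n"
        finally:
            pubsub.close()

    return Response(stream(), mimetype="text/event-stream")
```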
Existing Solutions
Before building, consider:
- Redis HTTP modules: Some Redis modules expose HTTP endpoints
- REST Redis: Projects that expose a REST/JSON API over Redis (Webdis is a well-known example)
- Custom proxy: Build exactly what you need
- Managed services: Upstash, Redis Cloud with HTTP APIs
The Bottom Line
The trade-off is between operational simplicity and performance. An HTTP proxy adds latency and a service to manage, but eliminates networking complexity and makes serverless integration trivial. For many use cases, that's a winning trade.
The question isn't whether this is possible - it clearly is. The question is: what's the right abstraction level? Should this be a generic Redis-over-HTTP bridge, or something more opinionated that optimizes for specific serverless patterns?
If you're wrestling with this problem, the discussion thread is worth following. It's the kind of practical architecture question that doesn't have one right answer, but many valid trade-offs.
What's your experience? Have you faced this Redis/serverless gap? What solution did you choose, and what would you do differently next time?
