The Shared Database Trap: Why One Database Per Service is the Path to Architectural Freedom


The question "Should we use one database for all our services, or should each service have its own?" echoes through development teams worldwide. The allure of simplicity is powerful—fewer databases mean less operational overhead, easier backups, and a single monitoring dashboard. But as with many architectural shortcuts, the initial convenience masks long-term consequences that can cripple system evolution.

This isn't a debate specific to any particular database technology, whether it's PostgreSQL, MongoDB, or an event store like EventSourcingDB. It's a fundamental question of service boundaries and coupling that affects every distributed system. And as teams who've followed the path of shared databases can attest, what begins as elegant simplicity often becomes a tangled web of dependencies and constraints.

The Allure of Shared Databases

For small teams with limited resources, the shared database approach seems pragmatic and efficient. With one database, you have:

  • A single backup strategy
  • One set of credentials
  • A unified monitoring dashboard
  • Reduced operational burden

In the early days of a project, this approach delivers on its promise. Your first few services can share data effortlessly. Need customer information in the order service? Just query the customers table directly. No APIs, no synchronization, no data duplication. The system feels fast, convenient, and simple.

The shared database also offers apparent data consistency advantages. When everything lives in one database, you can use transactions to ensure atomicity. Update the order and the inventory in a single transaction, and either both succeed or both fail. This eliminates the complexities of distributed transactions and eventual consistency that come with service isolation.
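This atomicity argument is easy to see in code. Here is a minimal sketch using Python's built-in `sqlite3` module as a stand-in for the shared database; the order and inventory schema is hypothetical, chosen only to illustrate the single local transaction.

```python
import sqlite3

# In-memory database standing in for the shared database (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER)")
conn.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 10)")
conn.commit()

def place_order(sku: str, qty: int) -> None:
    """Insert the order and decrement stock in one local transaction."""
    with conn:  # commits on success, rolls back on any exception
        conn.execute("INSERT INTO orders (sku, qty) VALUES (?, ?)", (sku, qty))
        cur = conn.execute(
            "UPDATE inventory SET stock = stock - ? WHERE sku = ? AND stock >= ?",
            (qty, sku, qty),
        )
        if cur.rowcount != 1:
            raise ValueError("insufficient stock")  # triggers rollback

place_order("widget", 3)
print(conn.execute("SELECT stock FROM inventory").fetchone()[0])  # 7

try:
    place_order("widget", 99)  # fails: only 7 left
except ValueError:
    pass
# The failed order was rolled back together with the stock update.
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1
```

Either both writes land or neither does. Once the tables live in separate services, this one-line guarantee has to be replaced by sagas, compensating actions, or eventual consistency.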

The Hidden Costs of Shared Databases

The problems with shared databases don't appear on day one. They emerge gradually as the system grows, teams expand, and services become more complex. By then, the architectural decisions have become deeply embedded, making changes difficult and risky.

Schema Changes Become Coordination Nightmares

When multiple services depend on the same tables, every schema change requires cross-team coordination. Want to rename a column? You'll need to check with the order team, the billing team, and the analytics team first. Adding a required field means ensuring every service handles it correctly before deployment. Removing deprecated data becomes a detective exercise to identify who still depends on it.

What should be independent development becomes a carefully choreographed dance. Release cycles slow as teams wait for each other. The promise of autonomous services dissolves into shared dependencies and synchronized deployments. A "simple" schema migration can evolve into a multi-team project spanning weeks of coordination.

The Database Becomes a Hidden API

Services communicate not through well-defined interfaces, but through shared tables. Service A writes a row; Service B reads it. There's no contract, no versioning, no clear ownership, and no documentation of what consumers expect.

This hidden coupling is invisible until something goes wrong. Change the format of a column, and consumers break silently. Add a new status value, and readers that don't understand it misbehave. Remove a column you believe is unused, only to discover something fails in production. The coupling is subtle but powerful, creating fragility that's difficult to detect until runtime.
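A small sketch makes the failure mode concrete. Two "services" (here just two functions, with a hypothetical schema and status values) communicate through one shared table; nothing documents the implicit contract, so a new status value written by one side breaks the other at runtime.

```python
import sqlite3

# One shared table acting as an implicit, undocumented API (hypothetical schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

def order_service_create(status: str) -> None:
    # "Service A" writes rows; nothing records what values readers expect.
    db.execute("INSERT INTO orders (status) VALUES (?)", (status,))

def shipping_service_pending() -> list[int]:
    # "Service B" reads the same table and assumes a closed set of statuses.
    known = {"NEW", "PAID", "SHIPPED"}
    rows = db.execute("SELECT id, status FROM orders").fetchall()
    for _, status in rows:
        if status not in known:
            raise RuntimeError(f"unknown status {status!r}")  # only fails at runtime
    return [oid for oid, status in rows if status == "PAID"]

order_service_create("PAID")
print(shipping_service_pending())  # [1]

# Service A later introduces a new status; Service B discovers it in production.
order_service_create("ON_HOLD")
try:
    shipping_service_pending()
except RuntimeError as e:
    print(e)  # unknown status 'ON_HOLD'
```

No compiler, schema migration, or code review on Service A's side could have flagged this: the dependency exists only in Service B's assumptions.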

Business Rules Get Scattered or Bypassed

When Service A can write directly to tables that conceptually belong to Service B, it can easily violate invariants that Service B would have enforced. Or worse, both services implement the same validation logic differently, leading to subtle inconsistencies.

The database itself can't enforce business rules—it only stores data. It doesn't understand that orders can only be shipped after payment is confirmed, or that customer addresses must be validated before use. Only the owning service can ensure that business rules are followed. When multiple services write to the same tables, those rules get scattered, duplicated, or ignored entirely.
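The "ship only after payment" rule mentioned above can illustrate why the owning service must be the single write path. This is a minimal sketch with a hypothetical in-memory `OrderService`; any service that could write to the underlying storage directly would bypass the check entirely.

```python
from dataclasses import dataclass

@dataclass
class Order:
    """Hypothetical aggregate owned by the order service."""
    order_id: str
    paid: bool = False
    shipped: bool = False

class OrderService:
    """Every write goes through the owning service, so the rule lives here once."""
    def __init__(self) -> None:
        self._orders: dict[str, Order] = {}  # stands in for the service's database

    def place(self, order_id: str) -> None:
        self._orders[order_id] = Order(order_id)

    def confirm_payment(self, order_id: str) -> None:
        self._orders[order_id].paid = True

    def ship(self, order_id: str) -> None:
        order = self._orders[order_id]
        # The invariant: orders can only be shipped after payment is confirmed.
        if not order.paid:
            raise ValueError("cannot ship an unpaid order")
        order.shipped = True

svc = OrderService()
svc.place("o-1")
try:
    svc.ship("o-1")  # rejected: the owning service enforces the rule
except ValueError as e:
    print(e)
svc.confirm_payment("o-1")
svc.ship("o-1")      # now allowed
```

With direct table access, a second service could simply set `shipped = True` and no code path would ever evaluate the invariant.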

Ownership Becomes Unclear

When multiple services write to the same tables, questions of authority emerge: Who is the definitive source of truth? When data is inconsistent, who fixes it? When a bug causes data corruption, who investigates? When performance degrades, who optimizes?

Clear ownership is the foundation of service autonomy. A shared database erodes that foundation, causing boundaries to blur and responsibilities to overlap. What should be independent services become entangled contributors to a shared mess, with nobody feeling responsible for the whole and problems falling through the cracks.

Runtime Interference Creates Unexpected Failures

Services that share a database also share its resources. A long-running analytical query from the reporting service can lock tables that the order service desperately needs. A sudden spike in traffic to one service can exhaust connection pools for all others. A poorly optimized query can bring down the entire system.

You wanted isolation; you got a single point of contention. microservices.io documents this as the "Shared Database" anti-pattern, and it creates exactly these problems of resource contention and cascading failures.

The Database-per-Service Pattern

The solution to these challenges is conceptually straightforward: each service gets its own database. A service owns its data, stores it however it sees fit, and exposes that data to other services only through well-defined APIs.

This principle has a name: the database is an implementation detail. No other service should know or care what database technology you use, what your schema looks like, or how you structure your data internally. If Service B needs data from Service A, it asks Service A through an API, keeping the database invisible to the outside world.
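In code, "the database is an implementation detail" means consumers depend on a contract, not on storage. The sketch below uses hypothetical service classes and a plain dict in place of a real database; the point is that `OrderService` could never tell if `CustomerService` swapped its storage or schema, because only the API shape is shared.

```python
class CustomerService:
    """Owns its storage; the dict stands in for whatever database it uses."""
    def __init__(self) -> None:
        # Internal schema: free to change at any time without telling anyone.
        self._db = {"c-42": {"name": "Ada", "tier": "gold"}}

    def get_customer(self, customer_id: str) -> dict:
        # The public contract: a stable shape, independent of internal schema.
        record = self._db[customer_id]
        return {"id": customer_id, "name": record["name"]}

class OrderService:
    """Depends only on CustomerService's API, never on its storage."""
    def __init__(self, customers: CustomerService) -> None:
        self._customers = customers

    def describe_order(self, customer_id: str) -> str:
        customer = self._customers.get_customer(customer_id)
        return f"order for {customer['name']}"

orders = OrderService(CustomerService())
print(orders.describe_order("c-42"))  # order for Ada
```

In a real system the method call would be an HTTP request or message, but the dependency structure is the same: the contract is `get_customer`, not the `_db` layout.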

This approach delivers several profound benefits:

Business Rules Are Enforced in One Place

The service that owns the data is the only one that can modify it. Every write goes through its logic, validation, and constraints. There's no backdoor access or circumvention possible. Rules are implemented once and applied consistently across the system.

Schema Changes Are Local

When you change your database schema, you change your service. Other services don't notice because they never saw the schema in the first place. You can refactor freely, optimize for new access patterns, or restructure entirely without coordinating with other teams.

Technology Choices Are Free

Each service can use the storage technology that best fits its needs. The order service might use a relational database, the search service could use Elasticsearch, and the cache might leverage Redis. No compromises, no lowest common denominator approaches required.

Teams Move Independently

No coordination is needed for internal changes. Each team owns its data and its release schedule. Autonomy becomes real, not just theoretical. Teams can iterate quickly, experiment with new approaches, and deploy on their own schedules.

Failures Are Isolated

When one service's database encounters problems, other services continue to operate normally. You can scale, maintain, and troubleshoot each database independently, reducing the blast radius of any single failure.

The primary cost of this approach is the need for explicit mechanisms for services to communicate and share information when necessary—APIs, messages, events. These require more upfront design than simply reading from a shared table. But they make the coupling explicit, versioned, and manageable. You know exactly what other services depend on, and you can evolve those contracts deliberately.

Yes, you lose cross-service transactions. But in practice, most systems that believe they need distributed transactions can be redesigned to work with eventual consistency and compensating actions. The flexibility gained is worth the trade-off. This is the Database-per-Service pattern, and it's fundamental to building autonomous, resilient systems.

Special Considerations for EventSourcingDB

Everything we've discussed applies to event stores just as much as to relational databases. Perhaps even more so, because the temptation to share is even stronger.

When you use EventSourcingDB, you store events that represent what happened in your domain. These events are rich with business meaning—they capture decisions, state transitions, and domain-specific facts. It's tempting to think: if all services could just observe these events directly, we'd have a beautifully integrated system with no need for additional communication infrastructure.

This is a trap.

The events in your EventSourcingDB are not just data; they are domain knowledge. They encode the internal workings of your service, the structure of your aggregates, the granularity of your state changes, and reflect how you've chosen to model your domain. When another service observes your events directly, it couples itself to all of these internal details—every field name, every event type, every structural decision.

Consider what happens when you need to refactor. You want to split one event into two for better granularity. You want to rename a field to match evolved domain language. You want to restructure your aggregate boundaries. If other services are observing your events directly, every internal change becomes a breaking change. You've traded one form of coupling (shared tables) for another (shared event streams). The problems are the same, just dressed in different clothes.

The EventSourcingDB is an implementation detail of your service. It should not leak to the outside world—not through direct database access and not through the observe endpoint. Allowing external services to observe your internal events couples them to the shape of those events, your internal domain model, and decisions that should be yours alone to change.

Domain Events vs. Integration Events

This brings us to a crucial distinction: not all events are the same.

Domain events are the events you store in your EventSourcingDB. They capture what happened within your service's bounded context. They're optimized for your internal needs—rebuilding aggregate state, updating read models, triggering internal workflows. They might be fine-grained, technically detailed, or structured in ways that only make sense within your service.

Your domain events might include OrderLineItemAdded, OrderLineItemRemoved, OrderLineItemQuantityAdjusted, and a dozen other granular facts that help you reconstruct the complete state of an order. This level of detail is valuable internally. It lets you understand exactly how an order evolved over time and build projections that answer any question about order history.

But other services don't need this granularity. They don't care about individual line item adjustments; they care that an order was placed and is ready to be processed.

Integration events are what you publish to the outside world. They represent facts that other services need to know about, but in a form designed for external consumption. Integration events have stable structures, explicit versioning, and clear contracts. They're your public announcements, carefully crafted for your audience.

An integration event might be OrderConfirmed, summarizing everything other services need to know: the order ID, the customer, the total amount, the shipping address. It's a deliberate, curated view of what happened, designed for consumers who don't share your internal context.
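The translation from granular domain events to one curated integration event can be sketched as a simple fold. All event shapes below are hypothetical, and the `OrderConfirmed` record is simplified (omitting fields like the shipping address mentioned above); the point is that internal history stays private while one versioned fact goes out.

```python
from dataclasses import dataclass

# Fine-grained domain events, internal to the order service (hypothetical shapes).
domain_events = [
    {"type": "OrderPlaced", "orderId": "o-1", "customerId": "c-42"},
    {"type": "OrderLineItemAdded", "orderId": "o-1", "sku": "widget", "price": 30},
    {"type": "OrderLineItemAdded", "orderId": "o-1", "sku": "gadget", "price": 20},
    {"type": "OrderLineItemRemoved", "orderId": "o-1", "sku": "gadget", "price": 20},
    {"type": "OrderPaymentConfirmed", "orderId": "o-1"},
]

@dataclass(frozen=True)
class OrderConfirmed:
    """Versioned integration event: the curated, external view of what happened."""
    version: int
    order_id: str
    customer_id: str
    total_amount: int

def to_integration_event(events: list[dict]) -> OrderConfirmed:
    """Fold the internal history into the one fact other services need."""
    customer_id = ""
    total = 0
    for event in events:
        if event["type"] == "OrderPlaced":
            customer_id = event["customerId"]
        elif event["type"] == "OrderLineItemAdded":
            total += event["price"]
        elif event["type"] == "OrderLineItemRemoved":
            total -= event["price"]
    return OrderConfirmed(version=1, order_id=events[0]["orderId"],
                          customer_id=customer_id, total_amount=total)

print(to_integration_event(domain_events))
# OrderConfirmed(version=1, order_id='o-1', customer_id='c-42', total_amount=30)
```

Because `OrderConfirmed` carries an explicit version, the order service can later split, rename, or restructure its domain events at will, as long as this fold keeps producing the same published shape.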

The EventSourcingDB serves the inside of your service. It's the foundation for your write model and your read model. Your projections observe it. Your event handlers react to it. Components within your service use the observe endpoint to stay in sync. This is exactly what EventSourcingDB is designed for.

But when you need to communicate with other services, you don't expose your EventSourcingDB. Instead, you publish integration events through an explicit channel—a message broker, an API, a dedicated event bus. You decide what to publish, when to publish it, and in what format. You control the contract. You can change your internal domain events freely, as long as the integration events you publish remain stable.

This separation gives you the best of both worlds. Internally, you have the full power of event sourcing: complete history, replay capability, flexible projections. Externally, you have clean contracts that you can version and evolve independently of your internal implementation. Your domain events are your private journal; your integration events are your public announcements. Keep them separate.

The Path Forward

When the question comes up ("Can we just use one EventSourcingDB for everything?"), the answer is clear: no. A shared EventSourcingDB reintroduces all the problems we've discussed for shared databases. Services would observe each other's domain events directly. Internal event changes would affect multiple consumers. The boundaries between services would blur. Domain knowledge would leak across service boundaries. Business rules could be bypassed. Ownership would become unclear.

Each service gets its own EventSourcingDB. Each service owns its domain events. Each service uses those events internally for its write model, its read models, and its projections. And each service publishes integration events explicitly when other services need to know that something happened.

This is not a limitation; it's what makes service-based architectures work. It's what keeps teams autonomous and systems evolvable. The EventSourcingDB is a powerful foundation for each individual service. But like any database, it belongs to that service alone.

If you're building a service-based architecture with EventSourcingDB, start by drawing clear boundaries. Identify which service owns which domain. Design the integration events that flow between services. Keep each EventSourcingDB invisible to the outside world. Your future self—and your colleagues—will thank you.

"The database is an implementation detail. No other service should know or care what database technology you use, what your schema looks like, or how you structure your data internally."
— Core principle of the database-per-service pattern

Source: https://docs.eventsourcingdb.io/blog/2025/12/11/one-database-to-rule-them-all/