A pragmatic approach to migrating from monolith to microservices that focuses on boundaries first, gradual extraction, and avoiding common pitfalls that turn service migration into distributed chaos.
When I first tried to "go microservices", I did it for the most honest reason developers ever do anything: because it sounded like the grown-up move. Our monolith was getting bigger, deploys were slower, the codebase was getting stickier, and a couple of hot endpoints were painfully slow. One incident turned into a full-team debugging party. And somewhere in my head, microservices became the obvious answer, like upgrading from a single server to real distributed services.
So I did what everyone does when they're tired and ambitious: started splitting things. New repo, new service name, new pipeline, new database, and a new clean boundary. For a few days, it felt amazing (like I had finally escaped :D). Then reality showed up and punched us hard:
- local dev got harder to set up
- debugging became distributed archeology
- staging became "works in service A but not in service B"
- tracing didn't exist yet, so everything was "maybe network issue?"
- deployments multiplied
- auth became a mess
- versioning became a new job
And much more.
The monolith wasn't gone; it had just moved into the network. That's when I learned the unpopular truth: microservices don't remove complexity, they relocate it. And if you migrate because you're "annoyed", you'll regret it.
So I rewound. Not to abandon services forever, but to adopt a migration plan that doesn't destroy your ability to ship. A plan so boring it actually works.
The problem was never the monolith
It was the lack of boundaries inside it. Most monolith pain comes from two things:
- everything can touch everything
- changes are not isolated
So the goal isn't "split into services". The goal is to make boundaries, and only then decide what deserves to become a service. If you can't draw boundaries in a monolith, you won't magically draw them across repos.
The boring plan: 6 steps that avoid regret
1. Make the monolith modular first (a modular monolith)
Before I extracted anything, I forced structure:
- separate modules by domain (not by folder vibes)
- enforce boundaries (imports, layering rules)
- stop sharing random utilities everywhere
- define what data each module owns
This sounds boring, but it's the kind of refactor that pays rent.
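To make that concrete, here's a minimal sketch of the boundary style that worked for us, in TypeScript (module and function names are illustrative, not from a real codebase): every domain module exposes exactly one public facade, and everything else stays internal.

```typescript
// billing/index.ts: the ONLY file other modules are allowed to import.
// Everything else under billing/ (db.ts, invoice.ts, ...) is internal.
import { randomUUID } from "node:crypto";

export interface Invoice {
  id: string;
  customerId: string;
  totalCents: number;
}

// A small, intentional public surface instead of "import whatever you find".
export async function createInvoice(
  customerId: string,
  totalCents: number,
): Promise<Invoice> {
  // Delegates to internal billing logic that no other module can reach.
  return { id: randomUUID(), customerId, totalCents };
}

// orders/checkout.ts: a consumer in a different module.
// import { createInvoice } from "../billing";       // allowed: the facade
// import { insertInvoiceRow } from "../billing/db"; // forbidden: internals
```

A lint rule (ESLint's no-restricted-imports, for example) turns the forbidden import into a CI failure instead of a code-review argument.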
2. Pick the first services based on pain and isolation
Most teams pick services based on what's "important". I picked based on what's isolatable. Good first candidates usually have:
- clear inputs/outputs
- fewer dependencies
- less shared data
- high traffic or distinct scaling needs
- a team that can own it
Bad first candidates:
- "core domain logic" touching everything
- workflows that span the entire product
- anything that requires shared transactions across many tables
My first extraction attempt failed because I picked a "central" domain.
3. Use the Strangler pattern: route traffic gradually
Instead of cutting over in one dramatic weekend, I did this:
- keep the monolith endpoint as the "front door"
- route a small percentage of traffic to the new service
- ramp up gradually
- keep a kill switch to fall back to the monolith fast
This changed the migration from "big bang" to "progressive rollout."
Why it matters: You're not migrating code. You're migrating risk.
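Here's a minimal sketch of that front door, assuming an Express-style monolith; the env vars, route, and service URL are made up for illustration. The knobs live in config so you can ramp up, or hit the kill switch, without a deploy.

```typescript
import express from "express";

const app = express();

// Rollout knobs (illustrative names); in practice they'd come from a
// config service or env so they can change without a redeploy.
const ROLLOUT_PERCENT = Number(process.env.ORDERS_ROLLOUT_PERCENT ?? "0");
const KILL_SWITCH = process.env.ORDERS_KILL_SWITCH === "on";
const NEW_SERVICE_URL = process.env.ORDERS_SERVICE_URL ?? "http://orders-svc:8080";

app.get("/orders/:id", async (req, res) => {
  const useNewService = !KILL_SWITCH && Math.random() * 100 < ROLLOUT_PERCENT;

  if (useNewService) {
    try {
      // Proxy to the extracted service...
      const upstream = await fetch(`${NEW_SERVICE_URL}/orders/${req.params.id}`);
      res.status(upstream.status).json(await upstream.json());
      return;
    } catch {
      // ...and on any failure, fall through to the monolith path below.
    }
  }

  // Legacy path: the monolith's original handler, untouched.
  res.json(await legacyGetOrder(req.params.id));
});

// Stand-in for the existing monolith logic.
async function legacyGetOrder(id: string) {
  return { id, status: "shipped" };
}

app.listen(3000);
```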
4. Freeze the old behavior with contract tests
The easiest way to break things is to rewrite logic with confidence. So I wrote contract tests around the monolith behavior:
- request/response shapes
- edge cases
- error codes
- expected side effects
Then I ran the same tests against the new service. This prevented the classic migration bug: "We improved it... and now it behaves differently."
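In practice that meant one suite, two targets. A sketch with Vitest (the endpoint, fixtures, and error code are illustrative):

```typescript
import { describe, it, expect } from "vitest";

// The same suite runs against the monolith AND the new service by
// switching a base URL; the assertions are the contract.
const BASE_URL = process.env.CONTRACT_TARGET ?? "http://localhost:3000";

describe("GET /orders/:id contract", () => {
  it("returns the expected response shape", async () => {
    const res = await fetch(`${BASE_URL}/orders/order-123`);
    expect(res.status).toBe(200);
    // Freeze the shape, not the implementation.
    expect(await res.json()).toMatchObject({
      id: expect.any(String),
      status: expect.any(String),
    });
  });

  it("returns 404 with a stable error code for unknown ids", async () => {
    const res = await fetch(`${BASE_URL}/orders/does-not-exist`);
    expect(res.status).toBe(404);
    expect((await res.json()).error).toBe("ORDER_NOT_FOUND");
  });
});
```

Run it once with CONTRACT_TARGET pointing at the monolith, once at the new service. Any diff is a conversation, not a surprise in production.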
5. Split data the boring way: keep DB shared at first (sometimes)
This part is controversial, but it saved my migration. Instead of going "separate database per service" on day one, I staged it:
Phase 1: service has its own code + deploy + scaling, but reads from the existing DB (carefully).
Phase 2: introduce a dedicated schema or tables with clear ownership.
Phase 3: move to a separate database only once events, replication, and ownership are clearly worked out.
This avoided building a full distributed data architecture prematurely. Because the moment you separate DBs, you inherit:
- eventual consistency
- dual writes
- eventing
- reconciliation
- backfills
- debugging async flows
A service boundary is already a big change. A data boundary is the real point of no return.
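One pattern made the phases painless for us, sketched below (names and SQL are illustrative): the service depends on a storage interface it owns, so moving from the shared DB to a dedicated one is a new implementation, not a rewrite.

```typescript
interface Order {
  id: string;
  status: string;
}

// The service's business logic depends on this interface,
// never on a concrete database.
interface OrderStore {
  getOrder(id: string): Promise<Order | null>;
}

// Phase 1: read the monolith's existing tables through a
// read-only connection (query function injected for brevity).
class SharedDbOrderStore implements OrderStore {
  constructor(
    private readonly query: (sql: string, params: unknown[]) => Promise<any[]>,
  ) {}

  async getOrder(id: string): Promise<Order | null> {
    const rows = await this.query(
      "SELECT id, status FROM monolith.orders WHERE id = $1",
      [id],
    );
    return rows[0] ?? null;
  }
}

// Phase 3: same interface, a database the service owns outright.
// Swapping implementations never touches the business logic.
class DedicatedDbOrderStore implements OrderStore {
  async getOrder(id: string): Promise<Order | null> {
    // ...reads from the service's own database
    return null; // placeholder
  }
}
```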
6. Build the boring platform stuff earlier than you want
Microservices only feel "clean" when your platform is mature. So I had to invest in:
- centralized logging
- tracing (or at least correlation IDs)
- consistent authN/authZ strategy
- shared observability dashboards
- deployment automation
- versioning rules between services
Otherwise your migration becomes: "We moved to microservices and now we can't debug anything."
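If full tracing is more than you can take on right now, correlation IDs are the 20% that buys 80%. A minimal Express-style sketch; the x-correlation-id header is a common convention, not a standard you're required to use:

```typescript
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();

// Reuse the caller's correlation ID if present, mint one otherwise,
// and always echo it back so clients can log it too.
app.use((req, res, next) => {
  const id = req.header("x-correlation-id") ?? randomUUID();
  res.locals.correlationId = id;
  res.setHeader("x-correlation-id", id);
  next();
});

app.get("/orders/:id", (req, res) => {
  // Put the ID in every log line (and forward the header on every
  // downstream call) so one request is traceable across services.
  console.log(JSON.stringify({
    msg: "fetching order",
    correlationId: res.locals.correlationId,
    orderId: req.params.id,
  }));
  res.json({ id: req.params.id, status: "shipped" });
});

app.listen(3000);
```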
The "services without regret" decision rule
After the boring plan, I changed my criteria. A module should become a service only if it needs at least one of these:
- independent scaling (traffic/compute profile differs)
- fault isolation (it fails and shouldn't take everything down)
- team ownership boundaries (real org scaling)
- security boundaries (sensitive workloads)
If the reason is "the monolith feels messy", the fix is not services. The fix is boundaries, standards, and refactoring.
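And if it helps to make the rule hard to argue around, here it is as a checklist, purely illustrative:

```typescript
interface ExtractionCase {
  independentScaling: boolean; // traffic/compute profile differs
  faultIsolation: boolean;     // its failure shouldn't take everything down
  teamOwnership: boolean;      // a real team will own it end to end
  securityBoundary: boolean;   // sensitive workload needs hard isolation
}

// "The monolith feels messy" is deliberately not a field on this type.
function shouldBecomeService(c: ExtractionCase): boolean {
  return (
    c.independentScaling ||
    c.faultIsolation ||
    c.teamOwnership ||
    c.securityBoundary
  );
}
```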
The best outcome wasn't "microservices"
It was boring deployments.
When the plan worked, something surprising happened: We didn't feel like we "migrated to microservices." We just slowly stopped suffering.
- deploys got smaller
- incidents got narrower
- teams gained ownership
- scaling became targeted
- the codebase became less scary
That's the goal.

Thank you for reading this article; I hope it's helpful 📖. See you in the next article 🙌
