Microservices: Shackles on your feet

March 14, 2026 · By Emirhan YILDIRIM


Microservices vs Monolith: When Splitting Your App Makes Things Worse

"Software engineering is more than just centering a div." — howtocenterdiv.com
TL;DR: You don't need microservices. You need better module boundaries. Split only when teams are truly independent, scaling needs are night-and-day different, or your headcount is pushing 150+. Before any of that — fix the code, draw real boundaries inside the monolith, set up tracing. Microservices don't fix a messy codebase. They just spread it across the network and make it someone else's 3 AM problem. When you do split, use a strangler fig. Not a rewrite. Never a rewrite.

You divided your monolith into 14 services. Congratulations. When your phone buzzes at three in the morning, you now have 14 deployment pipelines, 14 log streams, and 14 places for things to quietly die.
Was it worth it? For most teams: no.

The complaint is usually wrong

Teams don't say "we need microservices." They say "deployments keep breaking everything" or "we can't scale." Those sound like architecture problems. They almost never are.
| What people say | What's actually broken |
| --- | --- |
| Deployments break everything | No tests. Tight coupling. |
| Teams stepping on each other | No module boundaries |
| Can't scale | It's the database. Always. |
| Codebase is unreadable | Years of shortcuts with no conventions |
Splitting a messy monolith into services doesn't clean it up. You just moved the mess somewhere harder to see.

OK, so when do they make sense?

When domains truly don't touch each other

```
Payment Service → invoices, transactions, billing
Content Service → articles, media, metadata
Auth Service    → sessions, tokens, identity
```
Payments and content have nothing to say to each other at the data layer. Splitting them here is cheap. Each team deploys on their own schedule. Nobody's waiting on anyone.

When scaling needs are completely different

```yaml
services:
  api:
    replicas: 3
    resources:
      limits:
        cpus: "0.5"
        memory: 256M

  transcoder:
    replicas: 1
    resources:
      limits:
        cpus: "4.0"
        memory: 8G
```
A login endpoint and a video transcoder in the same process is just wasteful. One needs bursts of CPU, the other needs almost nothing. Split makes sense.

When you hit 150+ engineers

At some point Conway's Law just wins. Your architecture will look like your org chart whether you design it that way or not. Past ~150 people, waiting for a synchronized release is a real problem that microservices actually solve. Below that, it's just overhead.

Where it gets painful

The network is not your friend

```go
// Monolith. Fast. Simple. Never lies to you.
func GetUserOrder(userID string) (*Order, error) {
	user := userRepo.Find(userID)
	order := orderRepo.Latest(user)
	return order, nil
}

// Microservices. Now everything can fail.
func GetUserOrder(userID string) (*Order, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	user, err := userClient.Find(ctx, userID)
	if err != nil {
		return nil, fmt.Errorf("user service down: %w", err)
	}

	order, err := orderClient.Latest(ctx, user.ID)
	if err != nil {
		return nil, fmt.Errorf("order service down: %w", err)
	}

	return order, nil
}
```
No new features. Just new failure modes. Timeouts, retries, circuit breakers — all of it now lives in your codebase. Multiply by every cross-service call you have.
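That retry-and-timeout plumbing usually gets factored into a shared helper. A minimal retry-with-backoff wrapper, as a sketch: `withRetry` and its defaults are hypothetical, and a production version would also want jitter and circuit breaking.

```typescript
// A sketch of the wrapper every cross-service call now needs.
// The attempt count and backoff defaults are illustrative.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  backoffMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff before the next attempt: 100ms, 200ms, 400ms, ...
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, backoffMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Every call site then becomes `await withRetry(() => fetch(url))`, and that wrapper is one more thing your team owns forever.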

One SQL transaction becomes five async steps

```sql
-- Monolith. Atomic. Safe. Boring.
BEGIN;
UPDATE inventory SET stock = stock - 1 WHERE product_id = 42;
INSERT INTO orders (user_id, product_id) VALUES (101, 42);
COMMIT;
```
Inventory and orders have separate databases now, so that's gone. Instead:
```
1. Order Service: create order, status=PENDING
2. Publish: OrderCreated
3. Inventory Service: reserve stock
4. Publish: StockReserved
5. Order Service: status=CONFIRMED

Step 3 fails?
→ Publish StockFailed
→ Order Service: cancel order
→ Hope the compensating transaction doesn't get lost
→ Hope the queue doesn't drop the message
→ Good luck
```
One SQL commit turned into five distributed steps, two compensating transactions, and an event bus you now have to maintain. You will debug a broken Saga at 2 AM. Not maybe. When.
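The order side of the flow above can be sketched as a pure event handler. The event names mirror the steps listed; the in-memory `Order` shape and `handleInventoryEvent` are illustrative, not a real saga framework.

```typescript
// Sketch of the Order Service's reaction to the inventory reply.
type OrderStatus = "PENDING" | "CONFIRMED" | "CANCELLED";

interface Order {
  id: number;
  status: OrderStatus;
}

// StockReserved confirms the order; StockFailed triggers the
// compensating transaction: cancel the pending order.
function handleInventoryEvent(
  order: Order,
  event: "StockReserved" | "StockFailed",
): Order {
  if (event === "StockReserved") {
    return { ...order, status: "CONFIRMED" };
  }
  return { ...order, status: "CANCELLED" };
}
```

The handler itself is trivial. The hard part is everything around it: delivering the event exactly once, persisting the state change, and noticing when neither happens.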

Observability is a week of setup before it's useful

User reports a failed order. In a monolith, you grep one file.
```
API Gateway → Order Service → User Service
                            → Inventory Service
                            → Message Queue
                                  → Notification Service → Email Provider
```
Without distributed tracing across all of these, you're just guessing which hop failed. So before splitting anything, you need Jaeger or Tempo for traces, Prometheus and Grafana for metrics, Loki or ELK for centralized logs. None of it takes an afternoon.
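Until full tracing is in place, the cheapest stopgap is one correlation ID threaded through every hop. A sketch: the `x-correlation-id` header name and `withCorrelationId` helper are assumptions (the W3C Trace Context standard uses a `traceparent` header for real tracing).

```typescript
import { randomUUID } from "node:crypto";

// Reuse the incoming correlation ID so all hops share one trace;
// mint a fresh one at the edge when no ID arrived.
function withCorrelationId(
  headers: Record<string, string>,
): Record<string, string> {
  const id = headers["x-correlation-id"] ?? randomUUID();
  return { ...headers, "x-correlation-id": id };
}
```

Forward the returned headers on every outbound call and include the ID in every log line; at least then "grep one file" becomes "grep one ID".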

Local dev is now a DevOps problem

```bash
# Monolith
git clone && npm install && npm run dev

# Microservices
docker-compose up
# ...2 minutes pass
# service-a won't start because service-b isn't healthy
# service-b crashed because the volume mount path is wrong
# you find the GitHub issue from 2022 — closed, won't fix
# you spend 45 minutes on setup instead of writing code
```
This kills junior developers. Onboarding goes from "clone and run" to a half-day of Slack messages and tribal knowledge.

The thing nobody talks about: Modular Monolith

Pick one: big ball of mud, or 30 microservices. That's the false choice most teams think they're making.
There's a third option. Keep one deployable. Draw real boundaries inside it.
```
/src
  /modules
    /payments
      payments.controller.ts
      payments.service.ts
      payments.repository.ts
    /inventory
      inventory.controller.ts
      inventory.service.ts
    /notifications
      notifications.service.ts
```
Hard rule: modules talk through exported interfaces only. No sneaking into another module's utils folder.
```typescript
// This is fine
import { PaymentService } from '@modules/payments';

// This is how you end up with spaghetti
import { calculateTax } from '@modules/payments/utils/tax-calculator';
```
Same database. Same deploy. Clean boundaries. When you eventually need to pull payments into its own service, the boundary is already there. The extraction becomes boring work instead of archaeology.
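The "exported interfaces only" rule doesn't have to rely on code review. One way to enforce it mechanically, assuming you use ESLint and the `@modules/*` path alias from the example above, is the built-in `no-restricted-imports` rule. A sketch; adjust the pattern to your actual layout:

```json
{
  "rules": {
    "no-restricted-imports": ["error", {
      "patterns": [{
        "group": ["@modules/*/*"],
        "message": "Import from the module's public index only."
      }]
    }]
  }
}
```

With this in CI, the deep import in the earlier example fails the build instead of quietly becoming load-bearing.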

If you do split, do it slowly

Big bang rewrites fail. Use a strangler fig.
```typescript
// Before: facade calls in-process module
class PaymentFacade {
  constructor(private svc: PaymentService) {}

  async charge(amount: number, customerId: string) {
    return this.svc.charge(amount, customerId);
  }
}

// After: same interface, now calls HTTP
class PaymentFacade {
  async charge(amount: number, customerId: string) {
    const res = await fetch('https://payments.internal/charge', {
      method: 'POST',
      body: JSON.stringify({ amount, customerId }),
    });
    return res.json();
  }
}
```
Callers don't change. No feature freeze. No "we need two weeks to cut over." The facade absorbs the risk.
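The cutover itself can also be gradual rather than all-at-once. A deterministic bucketing helper, as a sketch: `inRollout` is hypothetical, and it assumes you gate the remote path per customer inside the facade.

```typescript
// Deterministic bucketing: the same customer always takes the same path,
// so a gradual rollout doesn't flip-flop between old and new code.
function inRollout(customerId: string, percent: number): boolean {
  let hash = 0;
  for (const ch of customerId) {
    // Simple 32-bit rolling hash; good enough for traffic splitting.
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 100 < percent;
}
```

Inside the facade: `if (inRollout(customerId, 5))` take the HTTP path, else the in-process one. Start at 5%, watch the error rate, ramp up.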

The actual decision

Four questions. Be honest.
```
1. Can this deploy without coordinating with other teams?
   NO → don't split.

2. Does it have a genuinely different scaling profile?
   NO → reconsider.

3. Do you have distributed tracing and centralized logs running?
   NO → build that first.

4. Does your team have the bandwidth to own this boundary end-to-end?
   NO → wait.
```
| Situation | What to do |
| --- | --- |
| Early-stage, small team | Monolith |
| Growing team, friction starting | Modular Monolith |
| Separate teams, clean domains | Split selectively |
| 150+ engineers, independent releases needed | Microservices |
| You read a blog post about Netflix | Go for a walk |
Netflix got to microservices after years of scale, with dedicated platform teams and hundreds of engineers. That's not where they started. That's where the pain eventually pushed them.
Stop borrowing solutions to problems you don't have yet.