Microservices Won't Save Your Startup
Everyone's splitting their app into microservices because Netflix did it. But you're not Netflix, and that architecture decision is probably costing you more than it's saving. Here's when a monolith is the smarter choice.
Somewhere around 2018, the software industry collectively decided that monoliths were bad. Legacy. Embarrassing. The kind of thing you whisper about in interviews but never put on your resume.
Microservices became the default. Split everything into small, independent services. Give each one its own database. Deploy them separately. Use Kubernetes to orchestrate the whole thing. It’ll be beautiful.
Except for most companies, especially startups and growing businesses, it’s not beautiful. It’s a distributed systems nightmare wearing a trendy architecture diagram.
The Netflix problem
The microservices movement got its credibility from companies like Netflix, Amazon, and Google. These companies operate at a scale most of us will never touch. Netflix handles hundreds of millions of streaming sessions. Amazon's infrastructure serves millions of requests per second at peak. At that scale, splitting your system into independently deployable services makes perfect sense.
The problem is that the industry took lessons from companies serving hundreds of millions of users and applied them to products serving hundreds. That’s like looking at how Boeing manufactures a 747 and deciding your furniture workshop needs the same assembly line.
Architecture should match your actual problems, not your aspirational ones. If your app handles a few thousand requests per minute, a well-structured monolith will serve you just fine. Probably better than fine.
The hidden cost nobody talks about
Here’s what the microservices pitch conveniently skips over: the operational tax is enormous.
With a monolith, you deploy one thing. You debug by reading logs in one place. You trace a request through a single process. When something breaks at 2 AM, you know where to look.
With microservices, a single user action might touch five services. Each one has its own deployment pipeline, its own logs, its own failure modes. When something breaks, you’re stitching together distributed traces across services, trying to figure out which one dropped the ball. You need service meshes, API gateways, circuit breakers, and a team that understands all of it.
We’ve seen startups with ten engineers spending 40% of their time on infrastructure instead of features. Not because they have complex scaling problems, but because they chose an architecture that demands constant infrastructure investment. That’s not engineering - it’s overhead disguised as progress.
When teams lie to themselves
The justification usually sounds reasonable on paper. “We need independent deployability.” “Teams need to own their services.” “We’re preparing for scale.”
Let’s unpack these.
Independent deployability sounds great until you realise that most microservices in most companies are tightly coupled anyway. Change the user service, and suddenly the order service, the notification service, and the analytics service all need updates too. You haven’t gained independence - you’ve just spread your coupling across network boundaries, which is strictly worse than spreading it across function calls.
Team ownership is a valid organisational pattern, but you don’t need microservices to achieve it. A well-structured monolith with clear module boundaries and strong code ownership rules gives you the same organisational clarity without the operational tax. Spotify’s famous “squad model” worked because of how they organised people, not because of how they organised services.
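One lightweight way to get that ownership clarity inside a single repo is a CODEOWNERS file, which most Git hosts use to route reviews. The directory names and team handles below are hypothetical, a sketch of the idea rather than a prescribed layout:

```
# Hypothetical CODEOWNERS for a modular monolith: each top-level
# module directory is owned by one team, so reviews route to the
# owners automatically - no separate services or deploys required.
/app/orders/    @acme/orders-team
/app/billing/   @acme/billing-team
/app/users/     @acme/identity-team
```

Ownership lives in the repo, enforced at review time, while the application still ships as one unit.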
Preparing for scale is the most dangerous one. You’re spending real engineering time today solving problems you might have in two years. If you’re a startup, you might not even exist in two years. The time you spend setting up Kubernetes clusters and writing service-to-service authentication is time you’re not spending on finding product-market fit. And that’s the thing that actually determines whether you survive.
The monolith isn’t the enemy
Somewhere along the way, “monolith” became a dirty word. It shouldn’t be. A monolith is just an application deployed as a single unit. It can still have clean architecture, clear module boundaries, separate concerns, and good test coverage. A well-built monolith is dramatically easier to develop, deploy, debug, and reason about than a poorly-built microservices system.
The real enemy isn’t the monolith. It’s the big ball of mud - code with no structure, no boundaries, no separation of concerns. But splitting a big ball of mud into microservices doesn’t fix the underlying problem. It just gives you a distributed big ball of mud, which is worse in every measurable way.
Fix the structure first. Then decide if you need to split.
The modular monolith: the answer nobody wants to hear
The unsexy middle ground is the modular monolith. One deployable unit, but internally structured into well-defined modules with clear boundaries, explicit interfaces, and minimal coupling.
This gives you most of the benefits people chase with microservices:
- Clear ownership. Each module has a team or owner.
- Independent development. Teams work on their modules without stepping on each other.
- Future extractability. When (if) you actually need to split a module into a separate service, the boundaries are already clean.
The difference is that you skip the distributed systems tax. No service mesh. No distributed tracing. No eight-service deployment choreography for a single feature.
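To make the idea concrete, here's a minimal sketch of what a module boundary looks like in code. In a real codebase each module would be its own package (say `app/orders/` and `app/billing/`); the module names, functions, and the flat 20% tax rate here are all hypothetical, chosen only to show the pattern: other modules call a module's public facade, never its internals.

```python
from dataclasses import dataclass


# --- billing module: only the facade functions are public ---

@dataclass(frozen=True)
class Invoice:
    order_id: str
    amount_cents: int


def create_invoice(order_id: str, amount_cents: int) -> Invoice:
    """Public entry point. Other modules call this, nothing else."""
    return Invoice(order_id=order_id, amount_cents=_apply_tax(amount_cents))


def _apply_tax(amount_cents: int) -> int:
    # Leading underscore marks an internal helper; by convention (or a
    # lint rule), other modules never import it directly.
    return int(amount_cents * 1.2)


# --- orders module: depends on billing only through its facade ---

def place_order(order_id: str, amount_cents: int) -> Invoice:
    # A plain in-process function call - no HTTP hop, no retry logic,
    # no service discovery. If billing ever needs to be extracted,
    # this call site becomes the seam.
    return create_invoice(order_id, amount_cents)


invoice = place_order("ord-42", 1000)
print(invoice.amount_cents)  # 1200
```

The boundary is the same one a microservice would give you, minus the network. Extraction later means swapping one function call for one client call, because the interface already exists.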
At IndieStudio, this is the architecture we recommend to most of the startups and growing companies we work with. Not because microservices are bad, but because they solve problems that most companies don’t have yet. And by the time you do have those problems, you’ll know exactly which module needs to become its own service.
When microservices actually make sense
To be fair, there are real use cases:
- Genuinely different scaling requirements. Your video transcoding service needs GPU instances, but your API layer doesn’t. Split them.
- Polyglot requirements. Part of your system genuinely needs to be in Python for ML, and part in Go for performance. Fine.
- Large organisations (100+ engineers). When coordination costs across a single codebase become the bottleneck, splitting makes sense.
- Regulatory isolation. Payment processing that needs PCI compliance can be isolated from the rest of your system.
Notice a pattern? These are specific, concrete problems. Not “we might need to scale someday” or “this is how modern software is built.”
Make the boring choice
The best architecture for your company right now is probably the boring one. A well-structured monolith, deployed simply, with good tests and clear module boundaries. It’s not glamorous. It won’t impress anyone at a conference. But it’ll let your team ship features instead of fighting infrastructure.
Save the microservices for when you have the problems that microservices actually solve. Your future self - the one who’s dealt with a distributed systems outage at 3 AM because a service mesh misconfiguration cascaded across twelve services - will thank you.
Build the simplest thing that works. Then evolve it based on real constraints, not imagined ones.