Your MVP Needs a Kill List, Not a Feature Roadmap
Most MVPs fail because they are not minimal, not focused, and not testing the right thing. The fix is not better prioritization. It is being ruthless about what to kill before you build.
Most MVPs are lying.
They get called “minimum viable products,” but they’re usually just smaller versions of the full product the team wanted to build anyway. Same assumptions, same complexity, same wishful thinking, just with a few features cut for speed.
That is not an MVP. That is a delayed, expensive mistake.
A real MVP is not supposed to prove that you can build the product. It is supposed to prove that the product deserves to exist.
That sounds obvious, but teams ignore it constantly. Founders want something polished enough to feel real. Stakeholders want “just a few” extra workflows. Engineers add flexibility for future use cases that do not exist yet. By the time the thing ships, nobody is testing one sharp hypothesis anymore. They are launching a bundle of guesses.
Then the feedback is muddy, the scope starts creeping, and six months later everyone is arguing about conversion instead of admitting the product never had a clean test.
The problem is not ambition. It is a lack of subtraction.
Most teams have a roadmap for what to build.
Very few have a kill list for what must not make it into version one.
That is the difference between an MVP that teaches you something and an MVP that just burns budget with a login screen.
What an MVP is actually for
An MVP has one job: answer the biggest business question with the least amount of engineering.
Not ten questions. One.
Examples:
- Will customers trust AI-generated first drafts enough to use them in their workflow?
- Will brokers upload deal data if the intake process is simpler?
- Will operations teams pay for a dashboard that replaces a weekly spreadsheet ritual?
- Will users come back for the insight, or only for the novelty?
If your MVP cannot be tied to one uncomfortable question like that, it is probably too broad.
A lot of teams say they are testing product-market fit when they are really testing whether their team can coordinate design, engineering, copy, analytics, onboarding, billing, permissions, and integrations all at once. That is not product validation. That is operational self-harm.
The kill list is where the real strategy lives
Before building anything, write down three lists:
1. What must be true for this product to work
These are your core assumptions. Be specific.
Not: “users will find it useful.”
Instead:
- users will complete the core task without onboarding help
- users will accept AI output with light editing instead of rewriting everything
- teams will tolerate one manual review step if it saves enough time
- decision-makers will pay for time saved, not just novelty
2. What must be built to test those assumptions
This list should be painfully short.
Only include functionality that directly helps test the assumption. If a feature does not sharpen the test, it does not belong in the MVP.
3. What is tempting, valuable later, and banned from version one
This is the kill list.
This is where you put:
- advanced permissions
- custom reporting
- multiple user roles
- edge-case workflows
- admin panels no customer asked for
- settings screens built for imaginary future flexibility
- integrations that make the product feel “enterprise-ready”
- design polish that hides weak product thinking
The kill list matters because teams are terrible at spotting scope creep in real time. If you do not ban things upfront, they sneak back in wearing nicer language.
“We just need basic roles.” “This integration is critical for adoption.” “We should support both flows in case users work differently.”
No. Pick one path. Test one thing. Learn something real.
Most MVPs are overloaded with fake risk reduction
This is one of the most common anti-patterns we see.
A team says they are adding more features to reduce launch risk. In reality, they are increasing product risk while reducing emotional risk.
Emotional risk says:
- what if users think it looks too simple?
- what if investors think it feels incomplete?
- what if the team feels embarrassed by how narrow it is?
Product risk says:
- what if we spend three months building the wrong thing?
- what if we do not learn why users are churning?
- what if we mistake feature activity for actual demand?
Teams often protect themselves from the first category and ignore the second.
That is backwards.
A narrow MVP can feel underwhelming in a demo. Good. It should. If the product idea only works when surrounded by layers of supporting features, that is useful information. Better to find that out early than after a full build.
The right MVP usually feels a little embarrassing
That discomfort is healthy.
If version one already feels comprehensive, it is probably too big.
The best MVPs usually have a few traits in common:
They solve one painful job
Not a category of jobs. One actual painful thing.
They rely on manual operations behind the scenes
This is not cheating. It is intelligence.
If a human can fake part of the system while you validate demand, do that. A lot of teams automate too early because they want the product to look scalable before it has earned the right to scale.
They create a strong signal
A strong signal means the outcome is interpretable.
If users are not engaging, do you know why? If they are engaging, do you know what they value? If they drop off, can you tie it to a specific moment?
The more moving parts you add, the weaker that signal gets.
Anti-patterns that quietly ruin MVPs
Building for three customer types at once
Pick one.
If you are trying to serve founders, operators, and enterprise buyers in version one, you do not have an MVP. You have a committee.
Adding architecture for scale before usage exists
Overbuilt architecture is the infrastructure version of confidence theater, and you do not need it.
A simple, well-structured system beats an overbuilt stack designed for traffic that has not shown up yet. At IndieStudio, we push teams toward enough structure to move cleanly, not enough complexity to feel important.
Measuring activity instead of validated behavior
Page views and signups are weak signals on their own.
Did users complete the core task? Did they come back? Did they invite teammates? Did they ask for the product to become part of a real workflow?
That is the territory where demand starts becoming real.
Letting stakeholder opinions outrank the test
This one kills good MVPs all the time.
Someone senior wants a dashboard. Someone else wants branding refinements. Another person wants analytics for every edge case. Soon the team is serving internal comfort instead of external learning.
If a request does not improve the test, it should lose.
A better way to scope an MVP
Here is the framework we keep coming back to.
Step 1 - Write the sentence you need the MVP to prove
Example: “Operations teams will use an AI assistant to turn messy inbound requests into structured next actions if the result is reviewable in under two minutes.”
That is specific enough to build against.
Step 2 - Remove anything that does not help prove it
Be brutal.
Step 3 - Ask how much of the experience can be manual
If a person can do part of the workflow behind the scenes for the first ten customers, that is usually smarter than building a full automation layer upfront.
Step 4 - Define the success signal before launch
What behavior would count as evidence? Repeat usage? Time saved? Conversion to paid? Team adoption within a week?
If success is undefined, interpretation turns political fast.
Step 5 - Ship before everyone feels ready
Readiness is overrated. Clarity matters more.
The point is not to look small. It is to learn fast.
A lot of founders hear “keep the MVP small” and translate it into “make the product weak.” That is not the idea.
The goal is to make the test sharp.
A sharp MVP can be ugly, manual, narrow, and extremely effective. A bloated MVP can be polished, expensive, and strategically useless.
If you are building version one right now, stop asking what else should go in.
Ask what needs to die.
That is usually where the real product thinking starts.
At IndieStudio, we usually scope MVPs by forcing the core hypothesis into one clean test and cutting everything that blurs the signal. That is less emotionally satisfying than a big roadmap, and much more useful.