Most digital projects that fail do not fail because of poor technical execution. They fail because the team built the wrong product. Or the right product for the wrong user. Or the right product too early, before validating that anyone truly wanted it.
The concept of an MVP has been in every product team's vocabulary for 15 years. But in practice it is consistently misunderstood: people use it as a synonym for "small product" or "cheap version". It is neither.
"An MVP is not a smaller version of your final product. It is the simplest experiment that can validate or disprove your business hypothesis."
Before building: write the hypothesis
The most common mistake is to start with features. "We need a dashboard, notifications, automated onboarding and an API for integrations." All of that may be correct, but it is not the starting point.
The starting point is a hypothesis. And it has a specific shape:
"We believe [user type] has [specific problem]. We will solve it with [concrete solution]. We will know it works if [metric] reaches [threshold] within [time frame]."
Real example: "We believe SME purchasing managers struggle to track approved budgets across departments. We will solve it with a shared board and deviation alerts. We will know it works if 40% of active users use it at least three times a week after one month."
Look at what that does: it defines the user, the problem, the solution and the success metric before a single line of code exists. If you cannot write that in one paragraph, you still do not know what you are building.
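One way to keep the team honest is to capture the hypothesis as a structured record rather than a slide. Below is a minimal sketch in Python using the budget-tracking example above; the class, field names and the `is_validated` check are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class ProductHypothesis:
    user: str            # who has the problem
    problem: str         # the specific pain we believe exists
    solution: str        # how we intend to solve it
    metric: str          # the behaviour we will measure
    threshold: float     # the value that counts as validation
    window_days: int     # how long the experiment runs

    def is_validated(self, observed_value: float) -> bool:
        # The hypothesis only holds if the observed metric clears
        # the threshold within the agreed window.
        return observed_value >= self.threshold

# The example from the article, written out explicitly.
budget_tracking = ProductHypothesis(
    user="SME purchasing managers",
    problem="cannot track approved budgets across departments",
    solution="shared board with deviation alerts",
    metric="share of active users using it 3+ times per week",
    threshold=0.40,
    window_days=30,
)

print(budget_tracking.is_validated(observed_value=0.27))  # False: keep iterating
```

The point is not the code. It is that every field must be filled in with something measurable before the build starts.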
The impact/effort matrix trap
Most teams prioritise features with an impact/effort matrix. It is useful, but it has a design flaw: it leaves out the most important variable.
The missing variable is: does this feature validate the core product hypothesis?
A feature can have high impact and low effort and still be wrong for the MVP if it does not help answer the central question. An analytics dashboard may be easy to build and frequently requested, but if your core hypothesis is about whether users complete the main flow, that dashboard tells you nothing.
The "one job" rule
There is one question that cleans up scope better than any prioritisation matrix:
What can a user do well with this version?
One thing, and only one. If the answer includes "and also", you have too much scope. The MVP should do one job so well that the user comes back for it. Everything else can wait.
Examples of a well-defined "one job":
- Create a project budget and share it with a client for approval.
- See how much time the team spent on each client this week.
- Publish a job opening and receive applications in one place.
Notice: those are not features. They are complete jobs with user, action and outcome. If you cannot describe the MVP that way, you will build a collection of disconnected features that serve no one particularly well.
"MVP scope is not defined by what you think users need. It is defined by what you need to learn to know whether the business works."
The MVP does not have to be software
One of the most expensive mistakes is assuming the MVP must be a working product. In many cases, the hypothesis can be validated before writing a single line of code.
Two techniques that work well:
- Wizard of Oz. The user believes they are using an automated system, but behind the scenes a person is doing the work manually. It validates whether users will actually go through the flow before you build it. Zappos used this to validate the shoe e-commerce model without holding any stock.
- Concierge MVP. You do the work manually and personally for your first users without automating anything. The goal is not to scale: it is to learn what is truly needed. If the user is not willing to pay for the manual service, they will not pay for the automated product either.
These approaches do not always apply — it depends on the product type and market. But before building, it is worth asking: is there any way to validate the hypothesis without code?
Learning metrics versus vanity metrics
Once the MVP is live, the temptation is to measure what is easy: signups, visits, downloads. Those are vanity metrics. They look good in a presentation, but they do not tell you whether the product works.
Learning metrics measure real behaviour against your hypothesis:
- Week-2 retention. Of the users who signed up, how many came back the following week? If it is below 20%, the product is not solving anything urgent.
- Main-flow completion. What percentage of users completes the "one job" that defines the MVP? If it is low, there is friction or the problem was not urgent enough.
- Usage frequency. How many times per week does your target segment use the product? If the use case is daily but real usage is weekly, something is off.
- Qualitative NPS. Not the score itself: the open answers from people who give you 9-10 and those who give you 1-3. Extremes tell the truth.
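If you already log signups and key events, the first two metrics take only a few lines to compute. Here is a minimal sketch in Python over a hypothetical event log; the `events` structure, field names and event types are assumptions for illustration, not the API of any particular analytics tool.

```python
from datetime import date, timedelta

# Hypothetical event log: one dict per event, as it might come out of an analytics export.
events = [
    {"user": "u1", "type": "signup",        "day": date(2024, 3, 1)},
    {"user": "u1", "type": "main_job_done", "day": date(2024, 3, 9)},
    {"user": "u2", "type": "signup",        "day": date(2024, 3, 2)},
    {"user": "u3", "type": "signup",        "day": date(2024, 3, 2)},
    {"user": "u3", "type": "main_job_done", "day": date(2024, 3, 3)},
]

signup_day = {e["user"]: e["day"] for e in events if e["type"] == "signup"}
signed_up = list(signup_day)

# Week-2 retention: users active at any point during days 7-13 after signing up.
def active_in_week_two(user: str) -> bool:
    start = signup_day[user] + timedelta(days=7)
    end = signup_day[user] + timedelta(days=14)
    return any(
        e["user"] == user and e["type"] != "signup" and start <= e["day"] < end
        for e in events
    )

week2_retention = sum(active_in_week_two(u) for u in signed_up) / len(signed_up)

# Main-flow completion: share of signed-up users who finished the "one job" at least once.
completed = {e["user"] for e in events if e["type"] == "main_job_done"}
completion_rate = len(completed & set(signed_up)) / len(signed_up)

print(f"week-2 retention: {week2_retention:.0%}, main-flow completion: {completion_rate:.0%}")
```

The exact thresholds matter less than tracking the same two numbers every week and comparing them against the hypothesis you wrote down.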
Signals that scope is blowing up
There are recurring patterns in projects where the MVP turns into a 12-month product:
- "We need roles and permissions from day one." In 90% of MVPs you only have one user type. Roles arrive when teams are actually using the product.
- "The client already asked for ERP integrations." An integration in the MVP is almost always scope creep. Clients ask for integrations with every product; that does not mean they need one to use yours.
- "It must be multilingual from day one." If you do not even have one customer yet, language is the least of your problems.
- "The design needs to be perfect before launch." Design matters. An MVP with careless UI is counterproductive. But "perfect" does not mean complete. It means clear and functional.
When to expand scope
The MVP has done its job when you have empirical feedback on the hypothesis. Not when it "works well", but when the data says: user X does Y in way Z with a frequency that justifies more investment.
At that point, and only at that point, it makes sense to expand scope, add features and think about integrations. Before that, you are building on assumptions.
We help teams define what to build in the first version of a digital product: hypothesis, scope, metrics and make-vs-buy decisions. No methodology masterclasses, just the judgement that comes from doing this repeatedly.
Let's talk →