This post is part of an ongoing series about Startup Validation called Testing Tuesdays.
Gain the edge with the hacks, tools and smarts to build startups the right way by subscribing below (if you haven’t already):
In the last edition of #TestingTuesdays we broke down how you can build a NoCode MVP for SaaS products.
Just building an MVP is not an experiment, however. Unless you are testing for feasibility (can we build it?), you need a definition of success, the right metrics, and to get your MVP 'out there'.
In this week's Testing Tuesdays, we look at how to set up a validation experiment with a NoCode MVP.
When (not) to use a NoCode MVP?
NoCode is doing to code what Photoshop did to traditional art supplies. It is more forgiving, and lets you build something 10x faster. This widens the NoCode MVP's use cases compared to coded MVPs.
Listed below are situations when we might consider using a NoCode MVP for validation.
Validation Levels:
Opportunity-fit: Not recommended.
Problem-fit: There are more efficient ways.
Solution-fit: Perfect!
Commercial-fit: Yes. Stripe is your friend.
Product-Market-fit: Unlikely.
Scale-fit: No.
Existing solution is a doc or spreadsheet.
The easiest SaaS MVPs are better versions of a document or spreadsheet that is already in use.
How to make it better? Interlinking data, collaboration, integration with other tools – anything that improves workflow. See last week's post for more.
Building a Startup Funnel to see what sticks.
If you are a builder, build! There's a reason accelerators and incubators have funnels filled with promising ventures: they are playing the numbers game.
With NoCode, so can you. As an (aspiring) early-stage founder, you can now build a bunch of MVPs at once, in search of that holy grail of traction.
Looking for a Quick & Dirty solution.
The average NoCode MVP won't cost you more time to build than a decent Smoke Test with A/B variations and a proper landing page. That's not bad for a fully functioning product.
Looking for a Cheap Solution.
Most NoCode tools have a 'forever free' tier. The paid ones rarely charge above $20/month. The average developer will charge you more per hour than your NoCode MVP will cost you in total.
Sidenote: Startup Studios I have worked with tend to invest a bit more in an MVP, because they don't want to hit the market with something below a certain quality standard.
Their reputation matters for their business. Depending on your target audience and circumstances, you might want to opt for the more polished (and more expensive) approach too.
0. Before you MVP
Startup success can be a side-effect of simply launching a lot. Build enough products and you're bound to hit the jackpot at some point.
From a macro perspective, this 'build first' mentality is great: more MVPs flood the market, so statistically more unicorns are born.
The problem is: 'just build' is not always the best way forward. Founders on a seed budget, for instance, want to be more deliberate and focus their efforts on a single product.
To make sure you are building the right thing, I recommend validating the following before you MVP:
Macro trend that is changing customer needs and therefore the market.
Customer Job(s) to be Done and alternative solutions they're considering.
Here's a great resource for defining Jobs to be Done and validating Problem-fit: https://bit.ly/3kgIRj9 (via @lennysan).
Problem-fit, ideally. If absent, consider running an interview before or in parallel to your MVP experiment. Insights can be used to pivot on the fly.
Where to find your customers. If you do not yet know how to reach potential users, there is no use in building an MVP.
Examples of Good Beta Acquisition Channels:
B2C: Communities, Groups, E-mail Lists, Niche-influencers (in network), ...
B2B: LinkedIn, Events, Groups, ...
Here's a list of 100+ launch channels: https://bit.ly/3bYFmLj, by @angezanetti.
1. Know why you are Testing
Any experiment setup begins by defining one or more hypotheses: what are we assuming to be true, and what are we looking to (in)validate? Be clear and be specific. Don't try to test everything at the same time; that will muddy your results.
Types of validations you can run using a NoCode MVP:
Desirability
Feasibility
Viability
Typical critical assumptions for a NoCode MVP:
We believe that the customer's most critical problem is [x]
We believe that [y] is the customer's preferred solution to [x]
We believe that we can build [y] by ...
(We believe that we can deliver [y] through [a, b, c] channels.)
We believe that customers are willing to pay us enough to solve [x] with [y] for us to turn a profit.
For an in-depth post about defining critical assumptions, read #TT02
2. Experiment Setup
If the critical assumptions define why you test, the Experiment Setup defines how you test. It provides a rough outline of what you are going to do to (in)validate the critical assumptions.
Experiment Setups are specific to your chosen validation method. For a NoCode MVP, your setup should at least include the following (see the sketch after this list):
Start & End Date / Runtime
Clear definition of MVP features & functionalities
Rough outline of the NoCode components and their connections.
Target Audience
Acquisition Channels & Strategy
For instance, decide: do you launch in public, or run an invite-only beta?
Analytics: How will you obtain your metrics?
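If it helps, you can pin the setup down as structured data before you start, so nothing stays vague. Here's a minimal sketch in Python; every field and value below is a hypothetical placeholder, not a prescription:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical setup record for a NoCode MVP experiment; adapt to taste.
@dataclass
class ExperimentSetup:
    start: date                 # start date
    end: date                   # end date / runtime
    features: list[str]         # clear definition of MVP features
    nocode_stack: list[str]     # rough outline of NoCode components
    target_audience: str
    channels: list[str]         # acquisition channels & strategy
    analytics: str              # how you will obtain your metrics

setup = ExperimentSetup(
    start=date(2021, 3, 1),
    end=date(2021, 4, 30),
    features=["shared project board", "email digests"],
    nocode_stack=["Airtable (data)", "Zapier (glue)", "Softr (front end)"],
    target_audience="freelance designers",
    channels=["niche communities", "invite-only beta list"],
    analytics="landing page analytics + Stripe dashboard",
)
```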
3. What to Measure
To leave no question about the outcome of your experiment, make sure that every outcome is measurable. The two ways we make sure outcomes are measurable are the definition of success and the one metric to measure.
Definition of Success
Every critical assumption should come with a clear definition of success. For instance, for the assumption:
"A2: We believe that customers will pay us to solve [x] with [y]."
the definition of success might look something like:
"We assume A2 to be validated if [n]% of customers choose the paid tier."
One Metric to Measure (OMM)
Every Definition of Success needs a pre-defined metric, a One Metric to Measure (OMM), that tells us what we will specifically measure.
In the above example focused on paid customers, your OMM should probably be conversion rate (%).
If your goal is to test interest in a solution, click-throughs (%) and sign-ups (%) might do the trick.
Your OMMs will inform your experiment conclusions. Choose them wisely.
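To make the conversion-rate OMM concrete: it is simply paid customers over total sign-ups. A quick sketch with made-up numbers:

```python
# Hypothetical numbers, for illustration only.
signups = 240        # total sign-ups during the run
paid = 18            # customers who chose the paid tier

conversion_rate = paid / signups * 100
print(f"Conversion rate (OMM): {conversion_rate:.1f}%")  # -> 7.5%
```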
Optional Metrics
That being said, the more you measure, the more you know – though be careful not to get lost in information overload. Metrics that are not crucial, but still informative, are your optional metrics.
Some optional metrics that I like:
On-page Heatmaps
Bounce Rate
Social Shares (Word of Mouth)
Buyer's Journey Metrics s.a.: Abandoned Carts, Direct Referrals.
4. Run the validation and gather data.
Got everything set up and ready to go? It's time to put it to the test. The moment of truth. It's also the moment cold feet set in.
Suddenly, your mind dreams up a bunch of excuses for why your MVP isn't ready yet. It wants you to be perfectionistic. More informed. Better prepared. Do not give in to this feeling though. Just launch.
If you are testing for feasibility, building the MVP is part of your validation. For most NoCode MVPs this will be the case, which is why I only address it now.
Build in Public
If you can, build in public. Firstly, you'll attract an early audience. Secondly, people will gladly give you tips that make your MVP better.
Share your findings and progress on social media. Perhaps even consider live-streaming or recording yourself whilst building and launching your MVP.
Either way – go for it. Stress-test everything first, then just put it out there.
Set a Cadence
Create a cadence of checking and evaluating your OMMs. Having fixed intervals between data points gives you a better sense of trends such as growth and churn later on. Log your results in a format you can analyse afterwards. Spreadsheets are OK, but pen and paper is probably too 1970.
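If you want to automate that discipline, a tiny script that appends a dated snapshot of your OMMs at every interval will do. A sketch, assuming you read the raw numbers off your tool's dashboard yourself; the column names are hypothetical:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("omm_log.csv")

def log_snapshot(signups: int, paid: int) -> None:
    """Append one dated row of OMM data to the running log."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "signups", "paid", "conversion_pct"])
        rate = round(paid / signups * 100, 1) if signups else 0.0
        writer.writerow([date.today().isoformat(), signups, paid, rate])

# Run at every fixed interval, e.g. each Monday morning.
log_snapshot(signups=240, paid=18)
```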
5. Evaluate and Decide: Launch / Pivot / Kill
Ever played that game F-ck, Marry, Kill? Yeah, me neither.
A game I do play all the time, though, is Launch, Pivot, Kill. Usually at the end of a Validation Run. It is the game that decides whether your MVP was a success.
Analyse the Results
After the pre-defined experiment end date, it is time to tally the score. This is where you check your OMMs against your Definitions of Success to see if your assumptions were validated.
Using the previous example:
A2: We believe that customers will pay us to solve [x] with [y].
D2: We assume A2 to be validated, if [n]% of customers choose the paid tier.
we simply need to check if:
average conversion rate (%) > n (%).
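In code, that check is a one-liner. A sketch with hypothetical numbers, reusing the conversion-rate OMM from earlier:

```python
# Hypothetical values; substitute your own threshold and measurement.
n = 5.0                    # success threshold from D2, in %
avg_conversion = 7.5       # measured OMM over the full run, in %

print("A2 validated" if avg_conversion > n else "A2 invalidated")
```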
Don't be afraid to use simple graphs and tables in your analysis. They will help you recognise patterns and outliers.
Results will not always be straightforward. For instance: You might find out that the conversion rate was low in the first month but 5x-ed in the second month, possibly making the average less reliable.
Evaluate your Options
Your analysis should be opinion-less. It is an objective representation of the results. If analysis is about representation, evaluation is about interpretation. This is where we draw our conclusions from the test results.
In our last example we found a big difference between the conversion rates in the first and second month. The result? The OMM didn't meet our definition of success. This might prompt us to look into what may have caused the disparity.
Maybe we find that our first month was January. Right after the Christmas holidays, people didn't want to spend as much. Once February came around and people started spending again, our conversion rate lifted.
That insight might lead us to omit the first month from the results, or extend the validation run by another month. Either way, it helps us make a more informed decision.
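To see how much one outlier month can move the needle, here's the same example in numbers. The monthly rates are made up:

```python
# Hypothetical monthly conversion rates (%): a weak January, then a 5x recovery.
monthly = {"Jan": 1.2, "Feb": 6.0}
threshold = 5.0  # definition-of-success threshold, in %

full_avg = sum(monthly.values()) / len(monthly)        # 3.6% -> invalidated
rest = [v for m, v in monthly.items() if m != "Jan"]
without_jan = sum(rest) / len(rest)                    # 6.0% -> validated

print(f"Including Jan: {full_avg:.1f}% (success: {full_avg > threshold})")
print(f"Excluding Jan: {without_jan:.1f}% (success: {without_jan > threshold})")
```

Same data, two very different conclusions – which is exactly why the evaluation step matters.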
Decide
After sifting through all the data, creating every possible graph, and entertaining every possible narrative, it is time to make a decision. Using your evaluation as a basis, your MVP meets one of three possible fates: 1. Launch, 2. Pivot, or 3. Kill.
Launch 🚀
…if the data clearly validates your critical assumptions on all levels. The market is ready for your product. You can launch a v1.0 with confidence.
Pivot ↩️
…if the data is ambiguous, contains weird outliers, or patterns you cannot explain. Pivoting means changing something in your MVP or the experiment setup to see if that changes the outcome.
Successfully validating one assumption, but invalidating another, can also be a trigger to pivot. It means your product is interesting on some levels, but needs tweaking to really hit the sweet spot.
Kill 🔪
…the idea if the data clearly shows your crucial assumptions were invalidated. Killing a startup idea is always hard, which is why founders tend to pivot more than they should.
It helps to have multiple new ideas, or even MVPs lined up. I think single-startup product funnels are the future. Diversify, then verify. NoCode brings us one step closer.
Thanks for reading! 🙏🏻
Want to get the tools, hacks & smarts to build startups the right way?
Subscribe to get my newsletter straight in your inbox.
If you feel the need to reciprocate, feel free to share, leave a like, or buy me a coffee ❤️.