Thursday March 22, 2018
The journey from product concept to business launch is risky, messy, and often unsuccessful. No matter how you slice it, risk is an inherent part of product development. And the more leading-edge the technology, the riskier it’s likely to be.
“But I have a really great idea!”
You may think you have a great idea, but the odds are against you:
72% of all new products end up failing. — Simon-Kucher & Partners, 2014
42% of failed startups were unsuccessful because their product or service didn’t address a real market need. — CB Insights, 2017
Given these staggering statistics, anything you can do to reduce risk — and in particular, to ensure you’re addressing a known market demand — is extremely valuable.
Fortunately, there are tools and approaches you can use to reduce the likelihood that your product will fail. By introducing hypothesis-driven validation into your process, you can ride the uncertainty inherent in your work toward much more favourable odds.
Hypothesis-driven validation can be broken down into six steps:
- Identify your assumptions
- Reframe assumptions as “hypotheses”
- Rank them in order of importance
- Design appropriate tests
- Conduct the tests
- Synthesize your learnings
Hypothesis-driven validation sounds commonsensical enough. Yet most businesses tend to avoid these activities, preferring instead to minimize risk through planning, analysis, financial modelling, and the like. These activities can provide all sorts of structure and confidence, yet ultimately they don’t compare to the nitty-gritty of testing assumptions, collecting evidence, and observing directly, which allows findings to emerge that no plan could anticipate. When it comes to uncovering opportunities (not just mitigating risk in the narrow sense), testing beats planning every time.
Let’s walk through these one by one.
1. Identify Your Assumptions
Every product concept is built on a pile of assumptions. Before doing anything else, try to list as many assumptions about your product as you possibly can, breaking them down into the simplest statements possible.
Since many of your assumptions are interconnected and hard to tease out, it pays to consider your product in terms of feasibility, desirability, and viability. These lenses will help you think about all the things you may be taking for granted.
Feasibility: Can the product be built with current technology? Can all of its features be built? Does it integrate with existing platforms in the market the way you want it to? Have you chosen the right technologies?
Desirability: Is your product solving a customer need? How is your product helping the user? Will people even want it? Why will they want it?
Viability: Will producing your product be time-, cost-, and resource-effective? Does it fit into your company vision? How about your business model? Are there similar products in the market? What are people willing to pay for your product, and how does that match up to your profit goals?
2. Reframe Assumptions as Hypotheses
Once you have a long list of assumptions, reframe them as “We believe that…” statements, or clear hypotheses. This helps expose them as subjective opinions still in need of proof rather than objective facts. Say, for example, that you were developing an app that automatically sends users a pair of trousers every month:
Your Assumption: Customers will be happy with the trousers they are mailed through the app.
Reframed as a Hypothesis: We believe that customers will be happy with the trousers they are mailed through the app.
3. Rank Hypotheses in Order of Priority
To determine which hypotheses merit the most attention for testing, consider how significant it would be if that hypothesis were proven false. Would it be a tiny hiccup, or would it mean the product couldn’t be made at all? Could the issue be solved with a small pivot, or would it spell ruin for your potential business?
Get your team together and write all your hypotheses out on Post-its. Throw them up on a wall and vote on what would be most threatening to the product’s success if the assumption turned out to be false.
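If your team prefers a lightweight digital companion to the Post-it exercise, the same ranking can be sketched in a few lines of code. This is only an illustrative sketch: the hypothesis texts and the 1–5 impact and uncertainty scores below are invented for the example, not taken from a real project.

```python
# Rank hypotheses by how threatening a false result would be.
# Each hypothesis carries two team-voted scores on a 1-5 scale:
#   impact      - how badly the product is hurt if the assumption is false
#   uncertainty - how little evidence we currently have for it
hypotheses = [
    {"text": "Customers will be happy with the trousers they are mailed",
     "impact": 5, "uncertainty": 4},
    {"text": "Customers will pay a monthly subscription",
     "impact": 5, "uncertainty": 3},
    {"text": "We can source trousers in all common sizes",
     "impact": 3, "uncertainty": 2},
]

# Highest risk = highest impact * uncertainty; test those hypotheses first.
ranked = sorted(hypotheses,
                key=lambda h: h["impact"] * h["uncertainty"],
                reverse=True)

for i, h in enumerate(ranked, start=1):
    score = h["impact"] * h["uncertainty"]
    print(f'{i}. [risk {score:2d}] {h["text"]}')
```

The scoring scheme (impact times uncertainty) is just one reasonable heuristic; the point is to make the team’s votes explicit and force an ordering before any tests are designed.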
4. Design the Tests
Paying greatest attention to the high-risk hypotheses, consider what the most applicable test methods might be for each one. Here are some sample test types:
Quantitative: Surveys (these can be as informal as a Google Form or Twitter poll), data analysis (of existing products, industries, etc.), A/B testing of prototyped experiences.
Qualitative: Wireframes, proofs of concept, and other prototypes for user feedback; in person or remote user/stakeholder interviews; ethnographic research; experience prototyping; roleplaying.
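To make the quantitative side concrete: an A/B test of two prototyped experiences ultimately comes down to comparing conversion rates and asking whether the difference could be chance. Below is a minimal two-proportion z-test using only Python’s standard library; the visitor and conversion counts are made up for illustration.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: variant B converted 60/500 visitors vs. A's 40/500.
z = two_proportion_z(40, 500, 60, 500)

# |z| > 1.96 means the difference is significant at the 95% confidence level.
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```

In practice you would pick your sample size before running the test, but even this rough check guards against declaring a winner on noise.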
In a table, match each hypothesis (listed in order of importance) with its appropriate test method. You may use more than one method per hypothesis.
Now, build prototypes, write survey questions, prepare interview questions — do whatever you need to do to effectively test your hypotheses.
5. Conduct the Tests
Following the test plan you’ve outlined, commence testing. Depending on the chosen test methods, here’s how that might play out:
Surveys: Send your survey link to participants in your user group.
Interviews: Reserve a space, print your interview questions, arrange for a note-taker.
Representative Designs: Build wireframes and/or prototypes, reserve a space to test, print test plans.
Start by testing the riskiest hypotheses, and hold off on jumping to solutions or making product modifications until you have worked through them all.
6. Synthesize Your Learnings
Once you’ve conducted all of your tests, making sure to capture all your data along the way, debrief with your team to go through the data, synthesize it, and capture learnings.
Some of your hypotheses will have been straight-up invalidated, while others will be proven true. But more often than not, the reality is murkier: your hypothesis might have been true, but not for the reason you thought; or it might have been false, but disproving it exposed a deeper truth that hadn’t yet been articulated.
The “synthesis” step can be messy, but it pays to take time and pore over your results carefully.
With your learnings in hand, you can finally return to your product concept and see what refinements, revisions, iterations, and research might be necessary to reduce risk.
And now, as with many agile processes, refactor, rinse, and repeat to further refine your direction.
We’ve had a lot of success with Hypothesis-Driven Validation at Connected. The rigour it instills in identifying assumptions, prioritizing hypotheses, and validating or invalidating them through testing allows small teams to accomplish a great deal on short timelines. The process not only helps narrow down and refine a concept; it actually generates new insights about users more broadly. I hope you experience success with it too.
Thanks for reading! (I assume you did. Get it?)
This post was written in collaboration with Connected Product Manager Nate Archer.