Summary of Continuous Discovery Habits by Teresa Torres
The problem
Sometimes we deliver a feature and it doesn’t have the impact that we’d hoped for. A central challenge of our product work is reducing the likelihood of ending up in this situation.
As our focus is on delivering value, not just on delivering features, we need to be rigorous in pursuing only the ideas that are most likely to successfully drive our desired outcomes.
The solution
Adopt a ‘continuous discovery’ approach to product work. Continuous discovery is:
- At a minimum, weekly touchpoints with customers
- By the team building the product
- Where they conduct small research activities
- In pursuit of a desired outcome
Teams using this methodology can test 10-20 new ideas or approaches a week. This is only possible if, instead of testing whole ideas, we test the underlying assumptions that need to be true for the ideas to succeed.
The process of continuous discovery
- Define an outcome
- Map the opportunity space:
  - Map the customer’s current experience.
  - Interview customers to understand their experiences by asking for specific stories, not their general ideas.
  - Make an easy-to-share-and-reference snapshot after each interview.
  - Automate the interview recruitment process so you always have a customer to talk to each week.
  - Map out the opportunity space using the steps of the experience map, or using interview summaries to identify key moments in time (nesting sub-opportunities under their parent opportunities).
- Compare and contrast the parent-level opportunities (e.g. on opportunity sizing, market factors, company factors, customer factors); take the top one and repeat the exercise on its children until you reach an opportunity with no children. This is your target opportunity. (“Instead of asking ‘Should we solve this customer need?’ ask, ‘Which of these customer needs is most important for us to address right now?’”) A minimal sketch of this level-by-level walk appears after this list.
- Generate ideas for the opportunity. You want 15-20 ideas in total.
- Generate ideas individually, then share, then generate more individual ideas.
- “research shows that your first ideas are rarely your best ideas. The goal is to push your creative output to find the more diverse and more original ideas”
- Evaluate the ideas together and make sure that they map up to the target opportunity.
- Dot vote to pick the best 3 ideas. We’ll use prototyping and assumption testing to go from 3 ideas to 1 idea. We want to set up a compare-and-contrast decision: “Which of these three ideas best delivers on our target opportunity?” This helps us get past falling in love with our first ideas, and makes sure that we’re generating lots of ideas (later ideas tend to be the best). Don’t test the ideas against each other directly - that would be loads of work. Instead, identify and test the most important assumptions that underlie the ideas.
- For each of the 3 ideas, generate lots of assumptions in order to work out the key underlying ones - we won’t need to test them all, but generating lots makes it more likely that we uncover the riskiest ones.
Types of assumptions:
- Desirability: does anyone want it? Will people get value from it?
- Viability: will it work for our business?
- Feasibility: can we build it? E.g. is it technically feasible?
- Usability: are people able to use it?
- Ethical: is there potential harm in building this?
How to identify assumptions:
- Story map your idea. Show explicitly the steps that the user (and anyone else involved) must go through to get value. Review each step for the 5 assumption types.
- Pre-mortems. “Imagine it’s six months in the future: the product launched and it was a complete failure. What went wrong?” It’s crucial to frame this as certain failure - i.e. it did fail - otherwise we won’t generate good ideas.
- Walk the lines of your opportunity solution tree, from your idea up to your opportunity and up to your outcome. Ask:
  - Why will the idea address the opportunity? There will be assumptions that you’re making here.
  - Why will the opportunity address the outcome?
- Decide which assumptions to test: Map your assumptions relative to each other, from least important to most important, and from weak evidence to strong evidence. Test the most important assumptions for which you have the weakest evidence. (A small worked sketch of this prioritisation appears after this list.) Important: “We aren’t testing one idea at a time. We are testing assumptions from a set of ideas”
- Test the key assumptions: We want to get data on people’s actual behaviour, to generate evidence on a key assumption. (We’re seeking to mitigate risk, not establish absolute truth.) So we should simulate an experience and give the participant an opportunity to either behave in the way our assumption expects - or not. Ideas will often share underlying assumptions, so a single assumption test can often help us test multiple ideas.
Methods for testing assumptions:
- Unmoderated user testing: e.g. provide a stimulus, then provide tasks to complete and questions to answer, all run asynchronously. “These types of [unmoderated user testing] tools are game changers. Instead of having to recruit 10 participants and run the sessions yourself, you can post your task, go home for the night, and come back the next day to a set of videos ready for you to watch.”
- One-question surveys - e.g. if we wanted to test “Our subscribers want to watch comedies” we could create a one-question survey asking “When was the last time you watched a comedy?”. As with everything else, make sure to ask about specific instances of past behaviour, rather than generalisations, or future-facing predictions.
- Using your own data
- Update your mapping of risky assumptions, and revisit the ones that are still in the ‘high importance, weak evidence’ quadrant (assuming they are above your organisation’s risk appetite). If the same assumptions remain at the top even after a smaller-scale test, then you can justify a more expensive test, e.g. a smokescreen test in production. “With assumption testing, most of our learning comes from failed tests. That’s when we learn that something we thought was true might not be. Small tests give us a chance to fail sooner.” Design tests for the best-case scenario; they will still often fail. If you fail in the best-case scenario, your results are clear. If you test with a less-than-ideal audience, someone will argue that you need to tweak the audience to test it properly.
- Measure the influence that the idea has on your outcome. Setting up the infrastructure to measure your outcome might take some work, but it’s essential. For example, measuring successful job applications is hard: “Just because the hire wasn’t happening on our platform didn’t mean it wasn’t valuable for us to measure it. We knew it was what would create value for students, our employers, and ultimately our own business. So, we chipped away at it. We weren’t afraid to measure hard things”
- “Track the long-term connection between your product outcome and your business outcome.”
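The opportunity space described above forms a tree: sub-opportunities nest under parents, and you choose a target by comparing siblings level by level until you reach a leaf. Here is a minimal sketch of that walk in Python; the class, field names, scoring, and example opportunities are illustrative assumptions, not anything from the book.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Opportunity:
    """A node in an opportunity solution tree (names are illustrative)."""
    need: str                        # customer need, phrased in the customer's words
    score: float = 0.0               # composite of sizing, market, company, customer factors
    children: list[Opportunity] = field(default_factory=list)

def pick_target(root: Opportunity) -> Opportunity:
    """At each level, take the highest-scoring child until we reach an
    opportunity with no children - that leaf is the target opportunity."""
    node = root
    while node.children:
        node = max(node.children, key=lambda child: child.score)
    return node

# Hypothetical example: one parent opportunity with two sub-opportunities.
root = Opportunity("Plan an evening of entertainment", score=0.9, children=[
    Opportunity("I can't decide what to watch", score=0.7),
    Opportunity("I want to watch with friends who live far away", score=0.8),
])
print(pick_target(root).need)  # -> "I want to watch with friends who live far away"
```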
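The ‘decide which assumptions to test’ step is essentially a prioritisation over a two-by-two of importance versus evidence. A small worked sketch, again with made-up scales, ideas, and example assumptions:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    importance: int       # 1 (minor) .. 5 (the idea fails without it) - assumed scale
    evidence: int         # 1 (just a hunch) .. 5 (strong existing data) - assumed scale
    ideas: frozenset      # which of the 3 candidate ideas rely on this assumption

def assumptions_to_test(assumptions, limit=3):
    """Most important first; among equally important, weakest evidence first."""
    return sorted(assumptions, key=lambda a: (-a.importance, a.evidence))[:limit]

backlog = [
    Assumption("Subscribers want to watch comedies", 5, 1, frozenset({"A", "B"})),
    Assumption("Users will grant notification permissions", 3, 2, frozenset({"C"})),
    Assumption("We can surface enough comedy titles", 4, 4, frozenset({"A"})),
]
for a in assumptions_to_test(backlog):
    print(f"Test next: {a.text} (shared by ideas {sorted(a.ideas)})")
```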
How to get started with continuous discovery
Start talking to customers every week. This is the foundation for everything else.
How to measure adoption of these habits
- How long since you last did a customer interview? (We’d expect this to be 1 week or less)
- How long since you last threw out an idea after an assumption test? (We’d expect this to be 1 week or less)