'Continuous Discovery Habits' by Teresa Torres

An excellent read for all product managers and founders, teaching you the proper way to constantly talk with your customers and do product discovery. At this point, most people building products know the importance of getting input from customers—of validating product ideas, doing user research, doing user testing, and so on—but not how to do it effectively, and that’s precisely what this book teaches you. It’s short and to the point, with no wasted pages or business speak.

Here are some of the key insights for me:

Defining continuous discovery

A key idea in this book is that product discovery is not something you do once, just to launch the product, but something you do continuously. The book defines continuous discovery as follows:

  • The team building the product…
  • Has touchpoints with customers at least once per week…
  • Where they conduct small research activities…
  • In pursuit of a desired outcome.

Since product teams make decisions every single day, the idea of continuous discovery is to infuse those daily decisions with customer input.

The structure of discovery

The process for doing discovery is:

  1. Define a clear business outcome. What business need are you trying to achieve?
  2. Discover and map out the opportunity space. Here, you explore customer needs: the pain points and desires—the opportunities—that, if addressed, could drive your desired business outcome.
  3. Discover solutions to address those opportunities. Come up with solutions to achieve the desired outcome.

These steps should be visualized in an opportunity solution tree (OST) (you can find an example image here). The root (top) of the tree is the business outcome you want as per item (1). This branches out into a series of opportunities and sub-opportunities you discover in (2). A key insight is that you only focus on the customer needs in (2) that could help you achieve your business needs in (1): this is how you ensure that what you build achieves both business and customer needs!

You will then pick a small subset of the most promising opportunities to focus on, branching out just these opportunities into possible solutions for them in (3). Finally, for each solution, you’ll further branch those out into a series of assumption tests that you can use to figure out which of the solutions is most likely to create the business & customer value you want.
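The OST described above is, at its core, a simple tree. Here’s a minimal sketch of that structure in Python—the node labels and the helper `add` method are illustrative, not something from the book:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in an opportunity solution tree (OST)."""
    label: str
    kind: str  # "outcome", "opportunity", "solution", or "assumption_test"
    children: list["Node"] = field(default_factory=list)

    def add(self, label: str, kind: str) -> "Node":
        """Attach a child node and return it, so branches can be chained."""
        child = Node(label, kind)
        self.children.append(child)
        return child

# (1) The root is the desired business outcome.
tree = Node("Increase weekly viewing hours", "outcome")

# (2) Branch into opportunities and sub-opportunities.
opp = tree.add("I can't decide what to watch", "opportunity")
sub = opp.add("Are my friends watching this show?", "opportunity")

# (3) Branch a promising opportunity into candidate solutions...
sol = sub.add("Show friends' watch activity", "solution")

# ...and each solution into assumption tests.
sol.add("Users are willing to share what they watch", "assumption_test")
```

The point of writing it down this way is that every solution traces back up through an opportunity to the business outcome at the root—if you can’t draw that path, the idea doesn’t belong on the tree.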

Instead of “whether or not” decisions, use a “compare and contrast” mindset

One of the most common mistakes product teams make is to get caught up in “whether or not” decisions: e.g., “should we stop everything to fix this problem?” or “should we stop everything to build this feature?” This is a trap that makes you myopic and leads to poor decision making, as you’re essentially asking, “is this valuable” whereas what you should really be asking is, “is this the most valuable thing we could do?”

Instead of framing decisions as “whether or not” decisions, you should shift to a “compare and contrast” mindset. Instead of “should we solve this customer need?” you should ask “which of these customer needs is most important for us to address right now?” Instead of jumping at the first idea you have, ask “how else might we address this opportunity?”

Visualizing your options using an OST helps you avoid the “whether or not” trap. By its very nature, the OST is a branching tree, which encourages you to flesh out a variety of opportunities (branches) and a variety of possible solutions (sub-branches). This allows you to use your limited time and energy on the most valuable opportunities and solutions available, rather than the first ones that come to mind.

You also want to use the “compare and contrast” mindset when ranking opportunities: for example, instead of going through each opportunity and asking, “how many customers does this affect?”, which would require a huge amount of data gathering, look at all your opportunities, and ask, “which of these opportunities affects the most customers?” It’s usually far easier to rank opportunities against each other than it is to evaluate them in isolation.
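One way to see why relative ranking is easier: you only need pairwise judgments (“which of these two affects more customers?”), not absolute counts per opportunity. A small sketch of that idea—the opportunities and the team’s pairwise judgments here are hypothetical:

```python
from functools import cmp_to_key

# Hypothetical opportunities from the OST.
opportunities = ["find new content", "offline viewing", "better subtitles"]

# The team's pairwise judgments: answers to "which of these two
# affects more customers?" (illustrative, not real data).
prefers = {
    ("find new content", "offline viewing"): "find new content",
    ("find new content", "better subtitles"): "find new content",
    ("offline viewing", "better subtitles"): "offline viewing",
}

def compare(a: str, b: str) -> int:
    """Order two opportunities using the team's pairwise judgment."""
    winner = prefers.get((a, b)) or prefers.get((b, a))
    return -1 if winner == a else 1

# Sort by pairwise comparison: most impactful opportunity first.
ranked = sorted(opportunities, key=cmp_to_key(compare))
```

Each judgment is a quick gut call the team can make in a meeting, whereas estimating an absolute “number of customers affected” for each opportunity would require real data gathering.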

Focus on product outcomes when managing by outcomes

Many businesses these days try to manage by outcome, using systems such as OKRs to set objectives for teams to achieve, and letting those teams figure out how to achieve those outcomes. This is generally a good thing, but only if you pick the right types of outcomes to focus on!

There are three general types of outcomes:

  • Business outcomes: track business progress. E.g., revenue, retention, stock price.
  • Product outcomes: track how the product drives business value. E.g., percent of satisfied customers.
  • Traction outcomes: track usage of specific features. E.g., usage of a certain part of the website.

The recommendation: in most cases, you should manage by product outcomes.

Although you certainly want to track business outcomes, they are not effective tools for managing by outcomes. That’s because (a) they are lagging indicators, so they are too slow to use in a product team’s iterative feedback loop and (b) they aren’t something the product team can influence directly—e.g., you can’t force a customer to buy or the stock price to go up!

Similarly, it’s useful to track traction outcomes, but you don’t usually want to use them to manage by outcome. That’s because traction metrics make an assumption that one specific feature is what really matters, but it may turn out that customers don’t care about that feature, or that feature isn’t tied to their overall success. If you assign a product team a traction metric as the outcome to achieve, then their hands are tied: they end up obsessing over a specific feature that may ultimately have no impact on the customer or business outcomes we care about. There are some exceptions where traction metrics are useful: e.g., for a junior product manager, improving a traction metric can be a good way to learn and ramp up; also, for a highly mature, proven product, where you know with very high confidence that the traction metric is tied to customer outcomes, focusing on that metric can be worthwhile.

For the majority of product teams, you are better off focusing on product outcomes. These tend to be leading indicators and they are outcomes the product team has some direct control over. Moreover, there is enough flexibility across the product where the team can explore and find the right things to focus on to affect those product metrics, rather than being tied to any one specific feature as with traction metrics.

Start with learning goals, then move on to performance (SMART) goals

At a high level, there are two types of goals you can set:

  1. Performance goals (SMART): one option is to set performance goals, which should be specific, measurable, achievable, relevant, and time-bound (SMART). Example: increase page views by 10% by the end of Q2.
  2. Learning goals: another option is to set learning goals, where you are trying to discover an approach or strategy that might work. These tend to be more open ended. Example: find opportunities that may increase engagement.

The research suggests that, when faced with a new and complex outcome, most teams perform better by setting learning goals first, and only later setting performance (SMART) goals. That is, give your team some time to do discovery work initially (e.g., figure out opportunities to increase engagement), before picking a specific performance metric to improve (e.g., increase page views by 10%). Without that initial discovery work, you’ll struggle to know what performance metric is worth improving (e.g., is it page views or time on site or DAUs), and the team will struggle to know how to improve that metric, leading to worse outcomes all around.

Ask customers about past behavior, not future predictions

When doing discovery, you will spend a lot of time interviewing customers. If you do it the wrong way—ask the wrong questions—you’ll get information that is very misleading. In particular, if you ask questions where someone has to predict how they might behave in the future or explain their preferences, people tend to think about their “ideal” selves and make up answers that are not reliable. E.g., you ask someone what they would pick on a menu, and they say salad, but when you observe what they actually pick, they go for the burger. Or you ask someone what criteria they use to pick out jeans, and they say it’s all about fit; but when you observe their actual behavior, they always buy jeans online, where you can’t check fit at all, so the real criteria are convenience, selection, and price.

The solution: ask customers about what they actually did in the past. E.g., ask “what did you pick on the menu last time you were at that restaurant?” or “tell me about the last time you bought jeans.” This lets you learn from actual behavior, rather than perceived or imagined behavior.

Note that even when asking about past behavior, customers may jump to generalizations: e.g., “I usually solve this by…” or “In general, what I do is…” These are predictions or interpretations and likely not to be reliable! Gently guide them back to specific past behavior: “OK, but in this specific instance, what exactly did you do?”

Vary the scope of your questions

You might ask customers a question specifically about the product you’re building: “tell me about your experience last time you used our product XXX to watch movies.” This will reveal pain points with your product. However, you may want to broaden the scope: “tell me about the last time you watched movies.” This will tell you about your direct competitors. Or you could go even broader: “tell me about the last time you did something for entertainment.” This tells you about the market category you’re in.

“You’ll want to tailor the scope of the question based on what you need to learn at that moment in time. A narrow scope will help you optimize your existing product. Broader questions will help you uncover new opportunities. The broadest questions might help you uncover new markets.”

Create experience maps

A key part of developing a product is understanding the full customer experience. This includes your product, but also everything happening with the customer around your product. To avoid missing this critical context, you should draw an experience map:

  1. Define the scope. This depends on the product problem you’re trying to solve. If you’re developing a totally new product, you’ll want the full experience around it; if you’re working on a single new feature, you might zoom in more. Example: if you’re building a brand new video streaming app, the scope might be, “how do customers entertain themselves with video?”

  2. Draw the customer’s experience, not your product. Don’t diagram your product, screen by screen. Instead, draw the process as the customer perceives it. Example: with the video streaming app, the experience might start with the customer finishing dinner, and looking for a way to relax at night; after that, they might choose to put on the TV; then, they might find your app. Even at this point, don’t draw your product screens, but focus on what the customer is trying to do: e.g., how do they choose what to watch? Where do they hear about new content? Who are they watching with? What issues do they hit along the way? And so on.

  3. No artistic skill is required. This isn’t an art project. Use stick figures, boxes, and arrows.

  4. Update the map based on customer interviews. As you talk to customers, you’ll want to ask them about the full experience, and to update your experience map based on what you learned. You may have to do some work to “excavate” the full story: ask them to “Start at the beginning—what happened first?” Or say, “Where were you? Set the scene for me.” Then prompt them to go further, with “What happened next?” or to fill in gaps with “Wait, what happened right before that?” Find out who else was involved with “Who was with you?” and uncover problems with “What challenges did you hit?” and “How did you solve those?”

Discover opportunities from interviews

To fill out your OST, listen for opportunities during customer interviews. These are needs or pain points.

A few key points:

  1. Record opportunities as problems and not solutions. Customers often express a specific solution they want, and it’s your job to dig in, and identify the underlying problem. For example, a customer might say, “I wish I had a way to search by voice.” This is actually a solution (a feature request)! Dig in and ask, “What would that do for you?” The response might be “I don’t have to spend tons of time typing out movie titles.” Ah, now you understand the underlying problem! Voice search is one way to solve it, but there are many other options worth exploring too, and it’s your job to figure those out. A good way to detect opportunities that are solutions in disguise is to ask, “is there more than one way to solve this?” If there’s only one solution, then this isn’t an opportunity, but that very solution!

  2. Record opportunities from the customer’s perspective, not your company’s. No customer would ever say, “I wish I had more streaming-entertainment subscriptions.” But they might say, “I want access to more compelling content.” Always record opportunities from the customer’s perspective: sanity check by asking, “would a real customer have said this, or are we just wishing someone would say this?”

  3. Break big opportunities down into smaller ones. You’ll sometimes hear opportunities from customers that, at first, seem very difficult to solve: e.g., “Is this show any good?” In these cases, you’ll want to break the large opportunity down into smaller sub-opportunities (adding them as child nodes in the OST): e.g., the sub-opportunities may be “Who is in this show?”, “Are my friends watching this show?”, “Is this a genre of show that I like?”, and so on. Usually, these sub-opportunities (a) will feel a lot more solvable and (b) give you the ability to deliver value over time, rather than trying to boil the whole ocean at once.

Fleshing out assumptions

Go through your story map and:

  1. Each time you see a step where you believe a user will do something, this is an assumption! Make these explicit across 3 dimensions: (a) desirability assumptions, where you assume the user wants to do what you’re asking, (b) usability assumptions, where you assume the user understands what they need to do and can figure out how to do it, and (c) feasibility assumptions, where you assume you can build what is required for each step of the map. For example, if a step in your map has a user coming to your product to watch sports, you are making (a) the desirability assumptions that users want to watch sports, and to watch them using your product, (b) usability assumptions that users can figure out how to watch sports in your product, and (c) feasibility assumptions that you’re able to get sports content into your product.

  2. Conduct a pre-mortem. At the start of a project, imagine it is six months in the future, your product or initiative launched, but it was a failure. What went wrong?

Identifying your leap of faith assumptions

Rank assumptions on a 2D chart with two axes:

  • X-axis: evidence. The left side is assumptions for which you have strong evidence and the right side is assumptions for which you have weak evidence.
  • Y-axis: importance. On top are assumptions which are more important for your product to succeed and on the bottom are assumptions which are less important for your product to succeed.

Remember that you are placing assumptions relative to each other, so the exact spot on the 2D chart doesn’t matter; all that matters is the location on the chart relative to other assumptions.

The assumptions that end up in the top right quadrant—the ones that are important, but for which you have weak evidence—are the “leap of faith” assumptions you should focus on!
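The quadrant logic above is easy to mechanize once each assumption has a rough relative position on the two axes. A small sketch, where the assumptions and their positions are purely illustrative judgment calls (not measurements):

```python
# Each assumption gets a relative position:
# x = evidence (0.0 = strong evidence ... 1.0 = weak evidence, i.e., the right side),
# y = importance (0.0 = less important ... 1.0 = more important, i.e., the top).
assumptions = {
    "Users want to watch sports at all":        (0.2, 0.9),  # strong evidence, important
    "Users will watch sports on our platform":  (0.9, 0.9),  # weak evidence, important
    "Users will tolerate ads during games":     (0.8, 0.2),  # weak evidence, less important
}

def leap_of_faith(assumptions: dict[str, tuple[float, float]]) -> list[str]:
    """Return the top-right quadrant: important assumptions with weak evidence."""
    return [
        name
        for name, (weakness, importance) in assumptions.items()
        if weakness > 0.5 and importance > 0.5
    ]
```

Since positions only matter relative to each other, the actual numbers are arbitrary—what matters is which assumptions land above and to the right of the rest.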

Testing assumptions with simulation tests

You want to create assumption tests that help you move assumptions from “weak evidence” to “strong evidence.” The best product teams can do 15-20 such tests per week! How? By testing just the assumption in a simulation test, rather than testing an entire idea.

Here’s how:

  1. Identify the right spot in the experience map. At what moment in time does this assumption come into play? E.g., if we are testing the assumption that a user will watch sports on our platform, the moment in time might be when they sit on their couch and turn the TV on.
  2. Define a hypothesis. If the assumption is true, what do we expect the user to do? E.g., if we are testing the assumption that a user will watch sports on our platform, the hypothesis is that at least X% of users will open our product after sitting down on the couch.
  3. Run a simulation test. Create a minimal simulation of solely this exact part of the experience. This might be as simple as a one-question survey: e.g., “Please select all the sports you’ve watched in the last month” or “When was the last time you watched a sporting event?”

Most of the learnings will come from failed tests: where users do not behave as you hypothesized. These simulation tests allow you to find these problems quickly—to “fail fast.”

Rating: 5 stars