What Customers Want · Anthony Ulwick · 2005
Companies often fail at innovation because they spend years and millions of dollars creating technology-driven offerings that never truly resonate with customers. Traditional R&D approaches rely on guesswork and incremental improvements, leading to extended development cycles and failure rates that hover around 90%. Such trial-and-error methods not only drain resources but also fail to uncover what truly matters to customers. Many organisations have tried to solve this by embracing the “customer-driven” movement, hoping that talking more with customers, conducting interviews and focus groups, and performing conjoint analysis would yield the breakthrough insights needed. Yet despite all these efforts, results remain unsatisfactory. These approaches tend to break down at the requirements stage because customers, when asked directly, rarely articulate their needs in a way that leads to successful products. Conventional voice-of-customer data, with its lists of features, vague benefits, and personal opinions, seldom translates cleanly into new offerings that create real value.
A radically different perspective has emerged: an outcome-driven approach. This method starts by recognising that customers buy products and services to get specific jobs done. People measure success not by vague attributes or trendy features but by how well their chosen solution improves their ability to accomplish a task. At the core are “desired outcomes,” the metrics customers use to judge if their job is being executed successfully. These outcomes are stable criteria that define what “better” really means. Identifying 50-150 such outcomes for a given job may sound complex, but doing so unlocks the ability to systematically prioritize opportunities. By ranking which outcomes are most important and least satisfied, it is possible to pinpoint exactly where a market is ripe for improvement and where innovation can deliver meaningful gains.
In this new approach, innovation strategy is formulated by first understanding what jobs customers are trying to get done. Once these jobs are clear, the goal becomes to discover which outcomes matter and how well current solutions fulfil them. Gathering customer inputs shifts from asking, “What features do you want?” to asking, “How do you measure success when performing this job?” and “Where are you struggling?” By translating these answers into quantitative importance and satisfaction ratings, companies can apply a predictable formula to reveal opportunities. The algorithm that combines importance and satisfaction data highlights which outcomes remain underserved. Rather than relying on gut feel, intuition, or guesswork, the entire innovation effort becomes grounded in data that aligns with customers’ actual measures of value.
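As a rough sketch, the book’s opportunity algorithm (importance plus the unmet-satisfaction gap, floored at zero) can be expressed in a few lines of Python. The outcome statements and ratings below are invented for illustration, and the 0–10 scaling is one common convention rather than something mandated here:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    importance: float    # e.g. share of customers rating importance 4-5, scaled to 0-10
    satisfaction: float  # e.g. share of customers rating satisfaction 4-5, scaled to 0-10

def opportunity_score(o: Outcome) -> float:
    """Opportunity = importance + the satisfaction gap, never letting the gap go negative."""
    return o.importance + max(o.importance - o.satisfaction, 0.0)

outcomes = [
    Outcome("Minimise time to set up the device", importance=8.2, satisfaction=3.1),
    Outcome("Minimise likelihood of data loss", importance=9.0, satisfaction=7.8),
    Outcome("Minimise purchase cost", importance=6.5, satisfaction=6.9),
]

# Rank outcomes: the highest scores mark important but underserved outcomes.
for o in sorted(outcomes, key=opportunity_score, reverse=True):
    print(f"{opportunity_score(o):5.1f}  {o.description}")
```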
The process does not end at identifying unmet needs. Once these outcomes are known and prioritised, companies can segment their markets in a completely new way. Traditional segmentation might rely on demographics, behaviours, risk profiles, or other arbitrary factors that do little to reveal why one group of customers is underserved. By focusing on outcomes instead, it’s possible to segment customers based on how they rate the importance and satisfaction of each outcome. This outcome-based segmentation reveals pockets of customers who share a common set of unmet needs. These customers may otherwise appear diverse, but they align around a core struggle or a series of outcomes they can’t achieve to their desired level. With this insight, companies discover untapped segments that value improvements no one else has yet delivered.
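The book does not tie outcome-based segmentation to a specific algorithm; one common way to implement the idea is to cluster respondents on the individual opportunity scores implied by their own importance and satisfaction ratings. A minimal sketch, using entirely invented survey data and scikit-learn’s k-means as an assumed choice of clustering method:

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one respondent; each column is the opportunity that respondent's own
# importance/satisfaction ratings imply for one outcome (random data for illustration).
rng = np.random.default_rng(0)
n_respondents, n_outcomes = 200, 12
importance = rng.integers(1, 6, size=(n_respondents, n_outcomes)).astype(float)
satisfaction = rng.integers(1, 6, size=(n_respondents, n_outcomes)).astype(float)
respondent_opportunity = importance + np.maximum(importance - satisfaction, 0.0)

# Cluster respondents who share a similar pattern of unmet outcomes.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
segment = kmeans.fit_predict(respondent_opportunity)

# Profile each segment by its mean opportunity per outcome to see where it is underserved.
for s in range(3):
    profile = respondent_opportunity[segment == s].mean(axis=0)
    top = np.argsort(profile)[::-1][:3]
    print(f"Segment {s}: most underserved outcome indices -> {top.tolist()}")
```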
Identifying these segments and underserved outcomes leads naturally to better targeting strategies. Instead of guessing which improvements to pursue or simply following existing competencies and resource allocations, companies make informed decisions. If a small number of outcomes are glaringly underserved and affect a large portion of the market, that becomes the logical entry point for new development. If, on the other hand, segments emerge that need a niche set of outcomes addressed, a more tailored solution may be introduced at a premium price. One might also find over-served outcomes—areas where products deliver far more than needed, inflating cost without adding value—allowing companies to strip away unnecessary features, reduce complexity, and improve margin. By using this systematic, data-driven targeting approach, growth options multiply. Companies can choose to improve existing offerings, create solutions for entirely new markets, refine internal processes for efficiency gains, or even explore disruptive moves that leverage emerging technologies to attack over-served customers or uncover new markets altogether.
With a clear set of priorities and a segmented understanding of the market, messaging and branding also improve. Traditional marketing fails when it relies on imprecise language that touts vague benefits like “faster” or “easier,” leaving customers to guess how a product might truly help them. By anchoring messaging in actual underserved outcomes, communication can highlight a product’s strengths in a way customers immediately understand. Instead of “This tool is easy to use,” a company can say, “This feature reduces the time it takes to achieve X outcome by 40%.” This precision eliminates confusion and ensures customers see the direct link between a product’s benefits and their own success criteria.
Some industries emphasise emotional branding over function, believing that tapping into customers’ feelings or identities is key. While emotional appeal can be effective in markets with low functional complexity, it is risky and often misguided in environments where customers care deeply about functional performance. High-function products and services must meet the functional outcomes first. Only when core outcomes are nearly fully met does it make sense to add an emotional layer to differentiate. Attempting to build brand loyalty on emotional messaging alone, without first ensuring the product performs its intended jobs reliably, is a common cause of marketing missteps.
The sales force also benefits from an outcome-driven approach. Equipped with detailed knowledge of which outcomes matter most to each market segment, sales teams can quickly identify a prospect’s primary struggles and tailor their pitch to highlight relevant features. This reduces guesswork, shortens sales cycles, and increases the chance of closing deals, as sales personnel can pinpoint how their solution targets specific underserved outcomes known to matter.
After aligning strategy, customer research, segmentation, targeting, and messaging, attention turns to the development pipeline. Many companies suffer from having too many projects and no objective way to rank them. Without a clear tie to underserved outcomes, projects persist because of internal politics, sunk-cost biases, or subjective hunches. The outcome-driven approach provides a method for evaluating each initiative by how well it addresses prioritised opportunities. Projects that promise to improve customer satisfaction along important but underserved outcomes rise to the top. Those that fail to move the needle are culled, freeing resources to invest in more promising ideas. This data-driven prioritisation avoids spreading teams too thin and ensures a lean, focused pipeline that consistently produces offerings customers will value.
Bringing it all together, the final piece is generating and evaluating breakthrough concepts. Traditional brainstorming often results in a flood of random ideas that do not necessarily solve any significant problem. Unbounded ideation sessions frequently measure success by the quantity of ideas rather than their quality. This is wasteful. With an outcome-driven model, brainstorming becomes “focused brainstorming.” Teams now have a clear target: the top underserved outcomes. Their goal is to find a small set of solutions that dramatically improve satisfaction in these areas. Constraints can be applied—such as cost, feasibility, or sustainability—to direct creativity toward practical solutions. Bad ideas are dismissed quickly, and the best ideas are refined until they deliver truly meaningful improvements.
To evaluate these concepts, the “customer scorecard” is introduced. This scorecard, based on the outcomes and their opportunity scores, allows team members (not customers) to objectively judge how well each proposed concept will satisfy every targeted outcome. No longer are people guessing at value; they can quantify it. For example, if a concept significantly improves performance on several high-importance, low-satisfaction outcomes, the scorecard reveals a large jump in overall value. Concepts that fail to deliver measurable gains are abandoned before costly development. Concepts that excel can be confidently advanced into the pipeline, armed with hard evidence of their potential impact. This transforms idea evaluation from guesswork into a predictive, quantifiable exercise.
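The exact arithmetic of the scorecard is not spelled out in this summary; one plausible sketch is to weight each concept’s predicted satisfaction gain on a targeted outcome by that outcome’s opportunity score. All names and numbers below are invented for the example:

```python
# Hypothetical scorecard: team members estimate how well each concept would satisfy
# every targeted outcome (0-10); each outcome is weighted by its opportunity score.
opportunity = {"outcome_a": 13.3, "outcome_b": 10.2, "outcome_c": 6.1}          # invented
current_satisfaction = {"outcome_a": 3.1, "outcome_b": 4.5, "outcome_c": 6.9}   # invented

concepts = {
    "concept_1": {"outcome_a": 8.0, "outcome_b": 5.0, "outcome_c": 7.0},
    "concept_2": {"outcome_a": 4.0, "outcome_b": 9.0, "outcome_c": 7.0},
}

def scorecard_value(predicted: dict[str, float]) -> float:
    """Sum of opportunity-weighted satisfaction gains over the current solution."""
    return sum(
        opportunity[o] * max(predicted[o] - current_satisfaction[o], 0.0)
        for o in opportunity
    )

# Concepts with the largest weighted gains are advanced; the rest are dropped early.
for name, predicted in sorted(concepts.items(),
                              key=lambda kv: scorecard_value(kv[1]), reverse=True):
    print(f"{name}: weighted value gain = {scorecard_value(predicted):.1f}")
```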
R&D also becomes more strategic. Instead of toiling on technology projects without knowing if customers will care, R&D teams receive explicit targets. They know which outcomes they must improve and can choose the best technological paths to achieve that. If no existing technology can solve the identified problem, R&D can focus efforts on developing or acquiring the right solutions. This prevents wasting resources on science projects that solve no pressing customer problem and ensures a tight coupling between technical exploration and market needs.
Overall, the outcome-driven approach creates a chain of value from strategy to execution. It starts by understanding that customers judge products by how well they help get jobs done. By defining a set of outcomes and quantifying their importance and satisfaction, companies gain a reliable compass for innovation. Using that compass, it becomes straightforward to identify opportunities, segment the market in meaningful ways, decide where to invest, craft clear messaging, allocate development resources efficiently, and confidently generate breakthrough concepts. This integrated system replaces chaos and guesswork with a methodical, fact-based process that can dramatically increase the odds of innovation success.
The ultimate promise of the outcome-driven approach is to transform innovation from an unpredictable gamble into a disciplined business process. Instead of betting millions on products that may fail, organisations now have the tools to understand customer demands at a granular level, build offerings that align with those demands, and measure success before a product ever reaches the market. By systematically meeting the right outcomes, companies deliver the products and services customers are eager to buy, leading to consistent growth, reduced time-to-market, and far fewer costly failures.
Quick Links
iPhone Screen Real Estate is Growing at 20% P.A. · Image
AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications and Challenges · Paper
10 Pricing and Packaging Optimisations · Article
Make Sure You Know Why Your MVP Didn't Work · Article
Product Design Process in the Era of AI · Image
A model for types and levels of human interaction with automation
Parasuraman, Sheridan, and Wickens. 2000. (View Paper → )
…Developments in hardware and software now make it possible to introduce automation into virtually all aspects of human-machine systems. Given these technical capabilities, which system functions should be automated and to what extent? We outline a model for types and levels of automation that provides a framework and an objective basis for making such choices. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator.
10 Levels of Automation
1. The computer offers no assistance; the human does everything manually.
2. The computer offers a complete set of decision alternatives, and the human makes the final choice.
3. The computer narrows the options down to a few possibilities for the human to choose from.
4. The computer recommends a single alternative, while the human retains the authority to accept it or select a different option.
5. The computer executes the recommended alternative, but only if the human approves.
6. The computer allows the human a limited time window to veto before it executes automatically.
7. The computer executes automatically, then necessarily informs the human.
8. The computer executes automatically and informs the human only if asked.
9. The computer executes automatically and informs the human only if it decides to.
10. Full automation: the computer decides everything and acts autonomously, ignoring the human.
4-Stage Model of Human Information Processing
This model describes the sequential stages through which a human operator processes information in an automated system:
Information Acquisition: Sensing and registering incoming data.
Information Analysis: Processing, integrating, and making sense of the data.
Decision and Action Selection: Evaluating alternatives and selecting the appropriate response.
Action Implementation: Executing the chosen decision.
This paper provided a clear, systematic framework for understanding human interaction with automation. It emphasised that automation isn’t simply an all-or-nothing switch but exists along a continuum—both in terms of the tasks that can be automated (acquisition, analysis, decision, and action) and the levels of automation applied. This perspective helped shift the focus toward designing systems that balance the benefits of automated decision making with the risks of reduced human involvement, such as loss of situational awareness, complacency, and skill degradation.
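To make the framework concrete, its two dimensions (the four information-processing stages and the ten levels) can be sketched as a small data structure. The enum names and the example system profile below are my own shorthand for illustration, not terminology from the paper:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Ten-point scale from fully manual (1) to fully autonomous (10), paraphrasing the paper."""
    MANUAL = 1
    OFFERS_ALTERNATIVES = 2
    NARROWS_ALTERNATIVES = 3
    SUGGESTS_ONE = 4
    EXECUTES_IF_APPROVED = 5
    EXECUTES_UNLESS_VETOED = 6
    EXECUTES_THEN_INFORMS = 7
    INFORMS_ONLY_IF_ASKED = 8
    INFORMS_IF_IT_DECIDES = 9
    FULLY_AUTONOMOUS = 10

def needs_human_approval(level: AutomationLevel) -> bool:
    # Through level 5 the human must choose or approve before anything is executed;
    # from level 6 upward the system can act on its own, with shrinking oversight.
    return level <= AutomationLevel.EXECUTES_IF_APPROVED

# A design in this framework assigns a level to each of the four processing stages,
# e.g. highly automated sensing and analysis but a human-in-the-loop decision:
profile = {
    "information acquisition": AutomationLevel.EXECUTES_THEN_INFORMS,
    "information analysis": AutomationLevel.EXECUTES_THEN_INFORMS,
    "decision and action selection": AutomationLevel.SUGGESTS_ONE,
    "action implementation": AutomationLevel.EXECUTES_IF_APPROVED,
}
for stage, level in profile.items():
    print(f"{stage}: level {int(level)} ({level.name}); "
          f"human approval required: {needs_human_approval(level)}")
```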
Book Highlights
When designing any campaign, it helps to consider these seven questions: Discovery: Is there an underlying problem worth solving? Acquisition: Are enough people interested/impacted? Activation: Does it deliver value? Retention: Do people come back? Revenue: What’s the impact (on revenue or some other meaningful metric)? Referral: Do people tell others? Traction: Did traction go up?
Ash Maurya · Running Lean
The only real way to break down the people silos that prevent cross-team collaboration is to unify the data completely, and start to build incentives for every stakeholder to enrich the core people data asset.
Kihn & O’Hara · Customer Data Platforms
The best way to deal with strategic issues is by squarely facing the challenge. Too many people start with goals and other visions of a desired end state. Start with the challenge, and diagnose its structure and the forces at work. Once you do that, your sense of purpose and the actions you consider will change. In that diagnosis, find the crux. That is the most critical part of the challenge that you can actually expect to solve. Don’t pick a challenge you cannot yet deal with—attack the crux of the situation, build momentum, and then reexamine your position and its possibilities.
Richard Rumelt · The Crux
In the first 15 seconds of every new experience, people are lazy, vain, and selfish. It sounds harsh. But I didn’t say it. Scott Belsky, Chief Product Officer at Adobe, did.
Wes Bush · Product-Led Onboarding
Outsized returns often come from betting against conventional wisdom, and conventional wisdom is usually right. Given a 10% chance of a 100 times payoff, you should take that bet every time. But you’re still going to be wrong nine times out of ten.
Jeff Bezos
If somebody watched you for a week, would they believe you about your goals?
Jay Yang