Testing with Humans · Giff Constable · 2018
A healthy reminder that most Product Teams spend too much time building and shipping, and too little validating what they’re about to build. Exploratory testing gives us a chance to validate our assumptions before we commit to building. This book only scratches the surface of exploratory testing, but it’s a good introduction, and I recommend it to people getting into product management.
Key Points
There are two types of experimentation in Product Management:
Optimisation: Making small changes to a live product. Results are measured and the better-performing variant is kept (a minimal sketch follows below).
Exploratory: Validating key assumptions before building a solution.
You can validate assumptions faster through experimentation than by building and launching complete solutions. Experiments help you move fast, increase the odds of success and make the most of limited resources.
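To make the optimisation type concrete: a common way to decide whether the better-performing variant actually won is a two-proportion z-test. The sketch below is mine, not the book’s, and the traffic and conversion numbers are invented for illustration.

```python
# Sketch of an optimisation experiment: did variant B beat variant A?
# All traffic and conversion numbers here are invented for illustration.
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that the variants are identical
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 4.0% vs 4.8% conversion over 10,000 visitors per variant
z = two_proportion_z(conv_a=400, n_a=10_000, conv_b=480, n_b=10_000)
print(f"z = {z:.2f}")  # z ≈ 2.76; |z| > 1.96 is significant at roughly 95%
```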
The high-level experiment process:
Identify your key risks and assumptions - in your business model, product or feature. There are a number of frameworks you can use to tease out assumptions:
Business Model Canvas by Alex Osterwalder
Assumptions exercise from Talking to Humans
Lean Canvas by Ash Maurya
Assumption Mapping by David Bland
Prioritise the riskiest assumptions - these become the hypotheses to test:
Use a 2x2 of impact and uncertainty (see the sketch after this list)
Identify and sequence experiments:
You can ideate on how to test an assumption, just like you can ideate around features. Involve the team, generate as many ideas as you can - then pick the best.
Incorporate the results into your decision making process:
Don’t forget this one → It’s amazing how many teams experiment and measure - but the results don’t inform the future direction.
Visualise your learnings from each experiment against each assumption - so you can see everything on a page.
There’s a correlation between how much effort an experiment takes and how believable its results are - from a paper prototype at the low end to a live product or business at the high end. You can plot experiments on a ‘truth curve’.
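As a toy illustration of the prioritisation step, you could score each assumption on impact and uncertainty and test the top-right quadrant of the 2x2 first. The assumptions and 1-5 scores below are invented, not from the book.

```python
# Hypothetical sketch: rank assumptions by impact x uncertainty (1-5 each)
# so that high-impact, high-uncertainty assumptions get tested first.
assumptions = {
    "Users will pay for premium features": (5, 4),  # (impact, uncertainty)
    "Onboarding emails drive activation":  (3, 5),
    "Users prefer mobile over desktop":    (2, 3),
}

riskiest_first = sorted(
    assumptions.items(),
    key=lambda item: item[1][0] * item[1][1],  # impact * uncertainty
    reverse=True,
)
for name, (impact, uncertainty) in riskiest_first:
    print(f"impact={impact} uncertainty={uncertainty}  {name}")
```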
The Five Traits of Good Experiments
They are structured and planned. Use a template.
They are focused. Test a core hypothesis, don’t try to do too much at once.
They are believable. You can trust what you’re learning.
They are flexible. Remain open to making small improvements as you go.
They are compact. You can get results quickly.
The Experiment Template
What hypotheses do we want to prove / disprove?
For each hypothesis, what quantifiable result indicates success? (Pass/fail metrics - see the sketch after this list)
Who are the target participants of this experiment?
How many participants do we need?
How are we going to get them?
How do we run the experiment?
How long does the experiment run for?
Are there other qualitative things to learn during this experiment?
Always be asking: How can we learn just as much with half the time and effort?
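One way to keep the template honest is to encode it. The sketch below is my own rough rendering in Python - the field names and example values are assumptions, not the book’s wording - but it captures the idea that the pass/fail line is agreed before the experiment runs.

```python
# Rough sketch of the experiment template as a checklist in code.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str         # what we want to prove / disprove
    pass_metric: str        # quantifiable result that indicates success
    threshold: float        # the pass/fail line, agreed before running
    participants: str       # who they are, how many, how we recruit them
    duration_days: int      # how long the experiment runs for
    qualitative_notes: str = ""  # other things to learn along the way

    def passed(self, observed: float) -> bool:
        """Compare the observed result against the pre-agreed threshold."""
        return observed >= self.threshold

landing_page = Experiment(
    hypothesis="SMBs will leave an email for a pricing preview",
    pass_metric="email signup rate on the landing page",
    threshold=0.10,
    participants="200 SMB visitors via a small paid campaign",
    duration_days=7,
)
print(landing_page.passed(observed=0.14))  # True: hypothesis supported
```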
You can build a culture of experimentation in either a Top-Down or Bottom-Up way:
Target execs who realise that the success rate of initiatives is too low
OR start a grassroots movement by experimenting where you can, and publicising the results
In the News
Nike invested billions of dollars into advertising that was less effective but easier to measure - a lesson for product managers everywhere · Article
A judge has ruled that Google maintained a monopoly in search. This isn’t surprising, but the implications could be profound for Alphabet. The last case of this magnitude was Microsoft in 2000, when the court ruled that Microsoft should be broken up. Microsoft skilfully avoided a forced breakup on appeal, making major concessions and agreeing to aggressive DOJ oversight. Will Alphabet be able to do the same? · Article
Quick Links
4 common product discovery mistakes and how to avoid them · Article
A thread of different GTM frameworks · Tweet
How to figure out metrics ownership · Article
The ‘Documentation Tradeoff’ and agile alternatives · Article
Marty Cagan on the Product Model and Org Design · Article
Assumption Prioritisation Canvas - Identifying and testing key assumptions · Article
Marc Andreessen on the onion theory of risk — Can this be applied to product? · Video
A plea for the lost practice of information architecture · Article
Understanding the Sources of Information Systems Project Failure
John McManus and A. Trevor Wood-Harper · 2007
Despite such failures, huge sums continue to be invested in information systems projects and written off; for example, the cost of project failure across the European Union was €142 billion in 2004. Whilst our understanding of the importance of project failure has increased, many of the underlying reasons for failure still remain an issue and a point of contention for practitioners and academics alike. This paper examines, through case research, some of the issues and causal factors of information systems project failure.
Engaging stakeholders well can be the difference between success and failure. Provide frequent updates and solicit constant feedback from yours.
Management issues cause 65% of IT project failures.
Technical shortcomings account for 35% of project failures.
Effective communication is critical for project success.
Lack of stakeholder and risk management contributes to failures.
Insufficient management support can derail projects.
Flawed estimation methods are a common pitfall.
Poor software requirements lead to technical challenges.
Unsuitable development tools can hinder project success.
Proper planning is essential, including contingency measures.
Technical support and user documentation are crucial post-deployment.
Book Highlights
If you deploy OKRs for your product organization, the key is to focus your OKRs at the product team level
Marty Cagan · Inspired
It’s important to remember that the log data you collect is evidence, and you need to foster an objective view of what users do on the site. Translating it into ratings and opinions is a subjective process, which is something that has to be tweaked not only for each domain but also for each recommender algorithm
Kim Falk · Practical Recommender Systems
How much flexibility is there in the Design Sprint recipe? Can we adjust the recipe? JK: Yes…but don’t adjust the recipe until you’ve tried the original.
Jeff Gothelf and Josh Seiden (quoting Jake Knapp) · Lean UX
Trying to guess at what needs-based segments exist and which needs are unmet introduces risk and variability into the innovation process. This is why statistically valid quantitative research is an essential part of the ODI process.
Anthony W. Ulwick · Jobs to be Done
An affordance is a relationship between the properties of an object and the capabilities of the agent that determine just how the object could possibly be used
Don Norman · The Design of Everyday Things