Notes: The Lean Startup
Careful planning and execution work for general management but not for startups. Perfect execution is futile if you end up building something nobody wants (waste). Real progress for a startup is not how many JIRA tickets the team closed but how fast it gains validated learning (what creates value for customers, and what they are willing to pay for) while minimizing waste.
Systematically break the business plan into parts and test each one: state a clear hypothesis (a prediction about what is supposed to happen), then A/B test to confirm or refute the prediction (see the sketch after the list below). Two leap-of-faith assumptions:
- Value hypothesis: do customers find the product valuable?
- Growth hypothesis: how do new customers discover the product?
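As a concrete illustration of testing a prediction, here is a minimal A/B-test sketch in Python using a two-proportion z-test; the experiment, counts, and numbers are all made up for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Test whether variant B's conversion rate differs from variant A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided test
    return p_a, p_b, p_value

# Hypothesis (hypothetical): the new onboarding flow (B) lifts signup conversion.
p_a, p_b, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, p-value: {p:.3f}")  # learn, then decide
```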
Test assumptions with an MVP that targets early adopters, inside the Build-Measure-Learn loop. Any work beyond what is required to learn is waste, no matter how important it seemed. If we do not know who the customer is, we do not know what quality is, so even a “low-quality” MVP can serve the building of a great, high-quality product. Plan the Build-Measure-Learn loop in reverse order: decide what we need to learn, then figure out what we need to measure, then see what product we need to build to run the experiment. Launch the MVP under a different brand name if branding risk is a worry. Commit to iteration: don’t give up because of bad news from an MVP; experiment more, learn more, and maybe pivot.
Do not blindly copy successful startups’ techniques. Charging customers from day one works for Stripe but not for Facebook. Low-quality early prototypes work for Facebook but not for mission-critical industries. Always experiment to see what works in our unique circumstances.
Eventually a successful startup will compete with fast followers. A head start is rarely large enough to matter. The only way to win is to learn faster than everyone else.
Vanity metrics, such as gross number of customers, are not actionable. We cannot tell whether growth in such a metric is
- caused by the latest product development work, or
- driven by decisions the team made long ago, with current initiatives having no impact.
Use cohort-based metrics (e.g., among users who signed up in June, what percentage exhibits the behavior we want), and use A/B tests to establish causality. Measure the team’s productivity in units of validated learning, not in the production of new features.
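A minimal sketch of computing a cohort-based metric, assuming a hypothetical list of (user, signup month, showed-target-behavior) records; all names and numbers are invented:

```python
from collections import defaultdict

# Hypothetical records: (user_id, signup_month, exhibited_target_behavior)
signups = [
    (1, "2024-06", True), (2, "2024-06", False), (3, "2024-06", True),
    (4, "2024-07", False), (5, "2024-07", True), (6, "2024-07", False),
]

cohorts = defaultdict(lambda: [0, 0])   # month -> [total users, users with behavior]
for _, month, behaved in signups:
    cohorts[month][0] += 1
    cohorts[month][1] += behaved

for month in sorted(cohorts):
    total, behaved = cohorts[month]
    print(f"{month}: {behaved / total:.0%} of {total} signups show the target behavior")
```

Comparing the same behavioral rate across successive cohorts shows whether recent product changes actually moved the needle, which a gross customer count cannot.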
Pivot: test a new fundamental hypothesis about the business model, product road map, partnerships, customer segments, or engine of growth. The decision to pivot depends on data and intuition; a misguided decision to persevere destroys value. Signs that it is time to pivot: decreasing effectiveness of product experiments and a general feeling that product development should be more productive. A startup’s runway is the number of pivots it can still make. To extend the runway, achieve the same amount of validated learning at lower cost in less time. Schedule regular “pivot or persevere” meetings in advance. When pivoting, don’t throw out everything and start over; repurpose what has been built and learned.
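A back-of-the-envelope reading of “runway as the number of pivots left”, with entirely made-up figures:

```python
# Illustrative numbers only.
cash_on_hand = 600_000        # dollars in the bank
monthly_burn = 50_000         # dollars spent per month
months_per_cycle = 3          # time for one Build-Measure-Learn iteration

runway_months = cash_on_hand / monthly_burn        # 12 months of cash
pivots_left = runway_months // months_per_cycle    # 4 chances to pivot

# Halving the cycle time doubles the pivots the same cash can buy.
print(runway_months, pivots_left)
```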
My personal thought:
The search for PMF is like gradient descent, a combination of intuition and data. Gradient descent is an optimization algorithm: start at a point (your best guess), then iteratively step downhill along the slope. The process is almost mechanical, but you need to start with an intuitive guess. A pivot means finding a new starting point (a code sketch of the analogy follows this list). You need to pivot when
1) the “slope” around the current point is flat in all directions, so nothing you do seems to improve the metrics, or
2) you are trapped in a poor local minimum (a saturated early-adopter market) and want to find a better local minimum (mainstream customers).
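To make the analogy concrete, here is a minimal gradient descent sketch; the loss function, learning rate, and thresholds are arbitrary choices for illustration. A near-zero gradient is condition 1, and restarting from a new point (the pivot) is the only escape from a poor local minimum in condition 2:

```python
def grad(f, x, eps=1e-6):
    """Numerical slope of a 1-D loss function at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def descend(f, x0, lr=0.01, steps=10_000, tol=1e-8):
    """Plain gradient descent from the starting guess x0."""
    x = x0
    for _ in range(steps):
        g = grad(f, x)
        if abs(g) < tol:      # slope is flat: no further improvement, consider pivoting
            break
        x -= lr * g           # step downhill
    return x

# A made-up loss with two minima; which one you reach depends on the start.
f = lambda x: (x**2 - 1) ** 2 + 0.3 * x
print(f(descend(f, x0=2.0)))   # settles in one valley
print(f(descend(f, x0=-2.0)))  # a new starting point (a pivot) finds a lower one
```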
Once you have found success with early adopters, you want to sell to mainstream customers. The early-adopter market saturates quickly despite prior “up and to the right” results, and mainstream customers have different and more demanding requirements. This is a customer segment pivot: the actions needed to win mainstream customers are different from how we won early adopters, and the pivot requires a new MVP.
Just as lean manufacturing uses just-in-time production to reduce in-process inventory, Lean Startups practice just-in-time scalability, running product experiments in small batches. Consider stuffing letters into envelopes: if the letters didn’t fit, the large-batch approach wouldn’t discover that until nearly the end, while small batches would reveal it almost immediately. A smaller batch size (a small diff per product code change) means a shorter Build-Measure-Learn cycle and less work-in-progress waste.
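A toy simulation of the envelope example, with made-up stage times, showing how batch size determines when the misfit is first discovered:

```python
def seconds_until_misfit_found(n_letters, batch_size, fold_time=2, stuff_time=1):
    """Work done before the first stuffing attempt reveals letters don't fit.

    Batch workflow: fold the entire first batch, then start stuffing;
    the defect surfaces on the very first stuffing attempt.
    """
    first_batch = min(batch_size, n_letters)
    return first_batch * fold_time + stuff_time

for batch in (100, 10, 1):
    t = seconds_until_misfit_found(100, batch)
    print(f"batch size {batch:>3}: misfit discovered after {t} seconds")
```

With a batch of 100 the problem surfaces after 201 seconds of sunk work; with a batch of 1, after 3.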
Sustainable growth: new customers come from the actions of past customers: word of mouth, product usage as advertising (wearing designer clothes), funded advertising, and repeat purchase.
Sticky Engine of Growth: repeat usage; use the customer retention rate to test the growth hypothesis. Other metrics like activation rate and revenue per customer can test the value hypothesis but have little impact on growth. If the rate of new customer acquisition exceeds the churn rate, the product will grow.
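A minimal sketch of the sticky engine’s arithmetic, with invented rates: a fixed inflow of new customers fights proportional churn, and the base grows only while inflow exceeds churn.

```python
def project_customers(initial, monthly_new, churn_rate, months=12):
    """Customer count after `months` of fixed acquisition and proportional churn."""
    customers = initial
    for _ in range(months):
        customers = customers * (1 - churn_rate) + monthly_new
    return round(customers)

print(project_customers(1_000, monthly_new=100, churn_rate=0.05))  # grows toward 2,000
print(project_customers(1_000, monthly_new=100, churn_rate=0.20))  # shrinks toward 500
```

The steady state is monthly_new / churn_rate, so retention sets the ceiling that acquisition can fill.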
Viral Engine of Growth: focus on increasing the viral coefficient, the number of new customers each existing customer brings in. Many viral products charge advertisers rather than customers, because a viral product cannot afford any friction that impedes signing customers up.
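A sketch of the viral loop under an assumed viral coefficient k (new customers recruited per existing customer): the loop compounds when k > 1 and fizzles out when k < 1.

```python
def viral_total(seed_users, k, generations=10):
    """Total users after each generation recruits k new users per member."""
    total, current = seed_users, seed_users
    for _ in range(generations):
        current *= k        # users recruited by the latest generation
        total += current
    return round(total)

print(viral_total(100, k=1.1))  # > 1: each generation is bigger than the last
print(viral_total(100, k=0.9))  # < 1: growth decays without another engine
```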
Paid Engine of Growth: advertising, outbound sales, foot traffic. Use the LTV/CAC ratio to test the growth hypothesis. Over time, CAC is bid up by competition.
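A common back-of-the-envelope LTV/CAC check, with made-up numbers; one simple approximation of LTV is monthly gross margin per customer divided by monthly churn:

```python
def ltv(monthly_revenue, gross_margin, monthly_churn):
    """Approximate lifetime value: margin per month over the churn rate."""
    return monthly_revenue * gross_margin / monthly_churn

cac = 150  # assumed cost to acquire one customer
value = ltv(monthly_revenue=30, gross_margin=0.7, monthly_churn=0.05)
print(f"LTV = ${value:.0f}, LTV/CAC = {value / cac:.1f}")
```

The engine runs only while the margin left over after CAC can fund the next round of acquisition, and rising CAC from competition erodes that gap.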
A startup can evaluate progress toward PMF by scoring each Build-Measure-Learn iteration with innovation accounting. Every engine is tied to a set of customers and their habits, preferences, channels, and interconnections, and thus eventually runs out of gas.
If the boss tends to split the difference, the best way to influence the boss is to take the most extreme position, and your opponents will do the same; over time, everyone takes the most polarized position. Don’t split the difference. Instead, create a sandbox for innovation that contains the impact, but not the methods, of the new innovation. It works as follows (a configuration sketch follows the list):
- Any team can create a true split-test experiment that affects only the sandboxed parts of the product or service (for a multipart product) or only certain customer segments or territories (for a new product).
- One team must see the whole experiment through from end to end.
- No experiment can run longer than a specified amount of time.
- No experiment can affect more than a specified percentage of customers.
- Every experiment has to be evaluated on the basis of a single standard report of five to ten actionable metrics.
- Every team that works inside the sandbox must use the same metrics to evaluate success.
- Any team that creates an experiment must monitor the metrics and customer reactions (support calls, social media reaction, forum threads, etc.) while the experiment is in progress and abort it if something catastrophic happens.
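One way to make these rules executable is a pre-launch guard that validates every experiment against the sandbox constraints. This is a hypothetical sketch (the metric names and limits are invented, not from the book):

```python
from dataclasses import dataclass

# A single standard report shared by every team in the sandbox (names invented).
STANDARD_METRICS = ("activation", "retention", "referral", "revenue", "support_tickets")

@dataclass
class SandboxExperiment:
    owner_team: str          # one team sees the experiment through end to end
    duration_days: int
    customer_pct: float      # share of customers the experiment may touch
    metrics: tuple = STANDARD_METRICS

    def validate(self, max_days=30, max_pct=0.05):
        """Enforce the sandbox rules before the experiment may launch."""
        assert self.duration_days <= max_days, "runs longer than allowed"
        assert self.customer_pct <= max_pct, "affects too many customers"
        assert 5 <= len(self.metrics) <= 10, "report needs 5-10 actionable metrics"
        return True

SandboxExperiment("checkout", duration_days=14, customer_pct=0.02).validate()
```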
If you like notes like this, check out my bookshelf.