3 Rules to Actionable Metrics in a Lean Startup

Written by Ash Maurya

First, what is an actionable metric?

An actionable metric is one that ties specific and repeatable actions to observed results.

The opposite of an actionable metric is a vanity metric (like web hits or number of downloads), which only documents the current state of the product but offers no insight into how we got there or what to do next.

In my last post, I highlighted the importance of thinking in terms of Pivots versus Optimizations before product/market fit.

Pivots are characterized by maximizing learning while Optimizations are characterized by maximizing efficiency.

This distinction carries over to metrics too. As we’ll see, some metrics matter more than others depending on the stage of the company, but more importantly, it’s how these metrics are measured that makes them actionable or not. I’ll share my 3 rules to actionable metrics, derived from Lean Startup principles, and specifically focus on what metrics I measure and how I measure them.

Rule 1: Measure the “Right” Macro

Eric Ries recommends focusing on the macro effect of an experiment (such as sign-ups versus button clicks), but it’s just as important to focus on the right macro. For example, spending a ton of effort to drive sign-ups for a product with very low retention is a form of waste.

Identify Key Metrics

The good news is that there are only a handful of macro metrics that really matter and Dave McClure has distilled them down to 5 key metrics. Of the 5, only 2 matter before Product/Market Fit – Activation and Retention.

Key Metrics Before Product/Market Fit

Before Product/Market Fit, the goal is validating that you have built something people want. You don’t need lots of traffic sources to support learning, and people don’t usually refer a product unless they have used and liked the service. So both Acquisition and Referral can be tabled for now. What does correlate with building something people want is providing a great first experience (Activation) and, most important of all, that people come back (Retention).

Note: Some of you might have noticed that I swapped Revenue with Referral from Dave’s version. This is because I believe in charging from day 1, which more naturally aligns Revenue with Retention (but does not replace it).

Map Metrics to Actions

The next step is to map specific actions in your product to Activation and Retention.

Activation Actions
Activation actions typically start with your sign-up process and need to end with the key activities that define your product’s unique value proposition (UVP); a minimal sketch of this mapping follows the note below.

[Figure: Activation Actions Mapping]

Note: The “Tell Friends” here is used to publicize shared galleries and I don’t count it towards “Referral”. I view “Referral” actions as being more deliberate endorsements of the product such as through an affiliate program.
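To make the mapping concrete, here is a minimal sketch of what an activation-step-to-event mapping might look like in code. The step and event names are hypothetical stand-ins, not the actual events from this post; substitute the key activities behind your own product’s UVP.

    # Hypothetical mapping of activation steps to tracked event names.
    # These names are illustrative, not the actual events from the post.
    ACTIVATION_FUNNEL = [
        ("Signed Up",       "signup_completed"),
        ("Created Gallery", "gallery_created"),   # first taste of the core value
        ("Uploaded Photos", "photos_uploaded"),
        ("Shared Gallery",  "gallery_shared"),    # the key activity behind the UVP
    ]

    def activation_events():
        """Return the ordered event names that define Activation."""
        return [event for _, event in ACTIVATION_FUNNEL]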

Retention Actions
There are a number of ways to define retention and Andrew Chen even draws a distinction between retention and engagement. Personally, I prefer to tie my retention action to the key activity that maps to the UVP.

[Figure: Retention Actions Mapping]

Rule 2: Create Simple Reports

Reports that are hard to understand simply won’t get used. Similarly, reports that are spread across pages and pages of numbers (ahem Google Analytics) won’t be actionable. I am a big fan of simple 1-page reports and funnels are a great format for that.

Funnel Reports – The Good, The Bad and The Ugly

Funnels are a great way to summarize key metrics: they are simple, visual, and map well to the Activation flow (and Dave’s AARRR startup metrics in general). Here is an example of a funnel for a service that is offered under a 14-day free trial.

[Figure: Conversion Funnel]
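A funnel report like this can be computed from a raw event log by counting, within the reporting period, how many distinct users reached each step. Here is a minimal sketch; the event names and the shape of the event records are assumptions for illustration.

    from collections import OrderedDict

    # Ordered funnel steps mapped to hypothetical event names.
    FUNNEL_STEPS = OrderedDict([
        ("Signed Up", "signup_completed"),
        ("Activated", "gallery_shared"),
        ("Purchased", "plan_purchased"),
    ])

    def funnel_report(events, period_start, period_end):
        """Count distinct users reaching each step within the reporting period.

        `events` is an iterable of dicts: {"user_id", "name", "timestamp"}.
        Returns {step: (count, percent of top-of-funnel)}.
        """
        users_at_step = {step: set() for step in FUNNEL_STEPS}
        for e in events:
            if not (period_start <= e["timestamp"] < period_end):
                continue
            for step, event_name in FUNNEL_STEPS.items():
                if e["name"] == event_name:
                    users_at_step[step].add(e["user_id"])
        top = len(users_at_step["Signed Up"]) or 1
        return {step: (len(ids), 100.0 * len(ids) / top)
                for step, ids in users_at_step.items()}

Note that a user who signs up near the end of the period but purchases after it is counted at the top of the funnel and missed at the bottom; that is exactly the boundary problem discussed next.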

But funnel analysis, as implemented by analytics tools today, has several shortcomings:

Tracking long life-cycle events
For one, it is hard to accurately track long life-cycle events. Almost all funnel analysis tools use a reporting period in which events generated during that period are aggregated across all users. This skews the numbers at the boundaries of the reporting period. But more importantly, because you are constantly changing the product, it is impossible to tie observed results back to specific actions you might have taken a month ago.

Tracking split tests
A more serious manifestation of the same problem is tracking split tests for a macro metric like Revenue, which also has a long life-cycle. An example of an experiment I am currently running is studying the long-term consequences of offering a Freemium plan alongside a Free Trial plan. I believe that a properly modeled Freemium plan should behave like a Free Trial. The only difference is that Free Trials have a set expiration, while with Freemium a user outgrows the service after some time X. I can guess with reasonable certainty that I will get more sign-ups with Freemium, but the bigger question is whether that will also translate into more Retention/Revenue. If so, what is the average time to conversion (time period X)? I can’t answer these types of questions with the current funnel tools.

Measuring Retention
And finally, funnel tools don’t provide a way to track retention, which by definition requires tracking user activity over long periods of time.

Funnels Alone Are Not Enough. Say Hello to the Cohort.

So while funnels are a great visualization tool, funnels alone are not enough. The analytics tools today work well for micro-optimization experiments (such as landing page conversion) but fall short for macro-pivot experiments.

The answer is to couple funnels with cohorts.

Cohort Analysis is very popular in medicine, where it is used to study the long-term effects of drugs and vaccines:

A cohort is a group of people who share a common characteristic or experience within a defined period (e.g., are born, are exposed to a drug or a vaccine, etc.). Thus a group of people who were born on a day or in a particular period, say 1948, form a birth cohort. The comparison group may be the general population from which the cohort is drawn, or it may be another cohort of persons thought to have had little or no exposure to the substance under investigation, but otherwise similar. Alternatively, subgroups within the cohort may be compared with each other.

Source: Wikipedia

We can apply the same concept of the cohort or group to users and track their usage lifecycle over time. For our purposes, a cohort is any property attributed to a user that we wish to track. The most common cohort used is “join date” but, as we’ll see, this could just as easily be the user’s “plan type”, “operating system”, “sex”, etc.

Let’s see how to apply cohorts to overcome the funnel shortcomings covered above.

Tracking Long Life-Cycle Events

The first report I recommend implementing is a “Weekly Cohort Report by Join Date”. This report functions like a canary in the coal mine and is a great alerting tool for picking up on actions that had an overall positive or negative impact.

[Figure: Conversion Funnel (repeated for comparison)]

[Figure: Weekly Cohorts]

You group users by the week in the year they signed up and track all their events over time. This report was generated from the same data used in the funnel above (which I’ve shown again for easy comparison). The key difference from the funnel report is that other than the join date, all other user events don’t need to have taken place within the reporting period. You’ll notice immediately that a lot of the conversion numbers (especially Purchased) are quite different because a cohort report doesn’t suffer from the boundary issues with simple funnel reports.

More importantly though, the weekly cohort report more visibly highlights significant changes in the metrics, which can then be tied back to specific activities done in a particular week.
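Here is a minimal sketch of how such a report can be generated, reusing the same assumed data shapes: users are bucketed by the week of the year they signed up, and every later event is credited to that cohort no matter when it occurred.

    from collections import defaultdict

    def weekly_cohort_report(users, events, funnel_steps):
        """Conversion per sign-up-week cohort, ignoring reporting-period boundaries.

        `users`        : iterable of dicts {"user_id", "joined_at" (datetime)}.
        `events`       : iterable of dicts {"user_id", "name", "timestamp"}.
        `funnel_steps` : ordered mapping of step label -> event name.
        """
        cohort_of = {u["user_id"]: u["joined_at"].strftime("%Y-W%W") for u in users}
        reached = defaultdict(lambda: defaultdict(set))   # cohort -> step -> user ids
        for e in events:
            cohort = cohort_of.get(e["user_id"])
            if cohort is None:
                continue
            for step, event_name in funnel_steps.items():
                if e["name"] == event_name:
                    reached[cohort][step].add(e["user_id"])
        report = {}
        for cohort in sorted(reached):
            size = sum(1 for c in cohort_of.values() if c == cohort) or 1
            report[cohort] = {step: 100.0 * len(reached[cohort][step]) / size
                              for step in funnel_steps}
        return report

Because each cohort is followed over its entire lifetime, a dip in one week’s row points at something that changed in that week rather than at a reporting-period artifact.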

Tracking Split Tests

Apart from reactive monitoring of the funnel, cohorts can also be used to proactively measure split-test experiments. Here is a report that uses the “plan type” as a cohort for the “Freemium” versus “Free Trial” experiment I described above.

[Figure: Split Testing Cohorts]

Disclaimer: My Freemium versus Free Trial experiment is still underway and these results are made up.

You can see that while activations are higher with the Freemium plan, Revenue (so far) is lower. That may change over time and it’s important to know the average time to conversion so the Freemium plan can be modeled accordingly.
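Here is a sketch of the same idea keyed on a “plan type” property instead of the join date, again with hypothetical field and event names; it also reports the average time from sign-up to purchase, which is the “time period X” in the Freemium model above.

    from collections import defaultdict

    def plan_type_cohorts(users, events):
        """Activation/purchase rates and average days-to-purchase by plan type.

        `users`  : dicts {"user_id", "joined_at", "plan_type"}  e.g. "freemium", "free_trial"
        `events` : dicts {"user_id", "name", "timestamp"}
        """
        by_plan = defaultdict(list)
        for u in users:
            by_plan[u["plan_type"]].append(u)
        first_event = defaultdict(dict)   # user_id -> {event name: first timestamp}
        for e in events:
            seen = first_event[e["user_id"]]
            if e["name"] not in seen or e["timestamp"] < seen[e["name"]]:
                seen[e["name"]] = e["timestamp"]
        report = {}
        for plan, members in by_plan.items():
            activated = [u for u in members if "gallery_shared" in first_event[u["user_id"]]]
            purchased = [u for u in members if "plan_purchased" in first_event[u["user_id"]]]
            days = [(first_event[u["user_id"]]["plan_purchased"] - u["joined_at"]).days
                    for u in purchased]
            report[plan] = {
                "activation_%": 100.0 * len(activated) / len(members),
                "purchase_%":   100.0 * len(purchased) / len(members),
                "avg_days_to_purchase": sum(days) / len(days) if days else None,
            }
        return report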

You can create a cohort out of any user property you collect and run reports to answer questions like the following (see the sketch after the list):

1. Do Mac users convert better than Windows users?
2. Do certain search keywords convert better than others?
3. Do female users convert better than male users?
etc.
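All of these questions reduce to the same operation: bucket users by the value of a property and compare a conversion event across the buckets. A minimal, generic helper, with field and event names assumed as before:

    from collections import defaultdict

    def conversion_by_property(users, events, prop, conversion_event="plan_purchased"):
        """Conversion rate per value of an arbitrary user property."""
        converted = {e["user_id"] for e in events if e["name"] == conversion_event}
        totals, wins = defaultdict(int), defaultdict(int)
        for u in users:
            value = u.get(prop, "unknown")
            totals[value] += 1
            wins[value] += u["user_id"] in converted
        return {value: 100.0 * wins[value] / totals[value] for value in totals}

    # e.g. conversion_by_property(users, events, "operating_system")
    #      conversion_by_property(users, events, "sex")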

Tracking Retention

Now for the most important metric that matters before Product/Market Fit – Retention. This report is also generated using a weekly cohort by join date but instead of tracking conversion, it tracks the key activity over time.

We only track “Activated” users, which is why all the Month 1 retention values are 100%. A Retention report can quickly tell you if you are moving in the right direction towards building a product people want or simply spinning cycles.
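Here is a minimal sketch of such a retention report under the same assumed data shapes: only users who performed the key activity at least once are counted, and each join-week cohort is scored on the share of those users who repeated the activity in each month after joining.

    from collections import defaultdict

    def retention_report(users, events, key_activity="gallery_shared", months=6):
        """Monthly retention by join-week cohort, counting only activated users.

        Month 1 covers days 0-29 after joining, month 2 days 30-59, and so on.
        When the key activity is also the activation event, month 1 is ~100%.
        """
        joined_at = {u["user_id"]: u["joined_at"] for u in users}
        active_months = defaultdict(set)   # user_id -> months (since joining) with activity
        for e in events:
            if e["name"] != key_activity or e["user_id"] not in joined_at:
                continue
            month = (e["timestamp"] - joined_at[e["user_id"]]).days // 30 + 1
            active_months[e["user_id"]].add(month)
        counts = defaultdict(lambda: [0] * months)
        cohort_size = defaultdict(int)
        for user_id, active in active_months.items():   # activated users only
            cohort = joined_at[user_id].strftime("%Y-W%W")
            cohort_size[cohort] += 1
            for m in active:
                if 1 <= m <= months:
                    counts[cohort][m - 1] += 1
        return {cohort: [100.0 * n / cohort_size[cohort] for n in row]
                for cohort, row in counts.items()}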

Rule 3: Metrics are People too

Metrics can only tell you what your users did. They can’t tell you why. A key requirement for making metrics actionable is that you should be able to tie them to actual people. This is not only useful in locating your most active users but, more importantly, for troubleshooting when things go wrong.

This last part is particularly important before product/market fit, when you don’t have huge numbers of users and need to rely more on qualitative versus quantitative validation.

Here is an example where I am able to extract the list of people who failed to complete the download step of my funnel. Armed with this list, I don’t have to guess what could have gone wrong. I can pick up the phone or send out an email and simply ask the user.

[Figure: Mapping People to Metrics]
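Pulling such a list is a simple set difference over the event log. A minimal sketch, with the event names again assumed for illustration:

    def users_stuck_before(users, events, reached_event, missing_event):
        """Contact info for users who reached one funnel step but never the next."""
        reached = {e["user_id"] for e in events if e["name"] == reached_event}
        completed = {e["user_id"] for e in events if e["name"] == missing_event}
        stuck = reached - completed
        return [(u["user_id"], u.get("email", "")) for u in users if u["user_id"] in stuck]

    # e.g. users_stuck_before(users, events, "signup_completed", "app_downloaded")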

How Do I Create These Reports?

I alluded earlier to the fact that most analytics tools are better suited to micro-optimization experiments than to macro-pivot experiments, which actually makes sense because optimization is (typically) a post-product/market-fit activity, and that is where the money is.

I have been an early user of both KISSmetrics and Mixpanel and while both of these tools are good at Funnels, they fall short when it comes to Cohort Analysis. Mixpanel does currently support retention cohort reports but not funnel cohorts, and I know Hiten from KISSmetrics is definitely thinking hard about cohorts. So hopefully we’ll see something soon there.

That said, I was really struggling with tracking my Freemium versus Free Trial split test so as an experiment, I decided to spend an afternoon building my own homegrown cohort analysis tool based on the conceptual People/Events/Properties model I learned from using KISSmetrics. All the reports you see here were generated using that.
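For reference, the People/Events/Properties model underneath all of the sketches above can be captured in a couple of small types. This is my reading of the conceptual model, not KISSmetrics’ actual schema or API.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Any, Dict

    @dataclass
    class Person:
        """A user, identified by id, with arbitrary properties (plan type, OS, sex, ...)."""
        user_id: str
        joined_at: datetime
        properties: Dict[str, Any] = field(default_factory=dict)

    @dataclass
    class Event:
        """A single named action performed by a person at a point in time."""
        user_id: str
        name: str
        timestamp: datetime
        properties: Dict[str, Any] = field(default_factory=dict)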

UPDATE – Jan 2013: For an extended treatment of actionable metrics, CLICK HERE for my short email course on Innovation Accounting.






  • John Emmatty

    Great article which covers all the fundamentals of creating actionable metrics for an early-stage startup. I’ve learned from this article that the most relevant key metrics prior to product/market fit are activation and retention, but I read somewhere else that you realize you have hit product/market fit when referrals go up. In that case, shouldn’t we track referrals along with activation and retention? Feel free to correct me if I am wrong.

    Ash Maurya Reply:

    Activation and Retention are grouped as value metrics. For any product, you need to first validate your value metrics, then your growth metrics. Acquisition, Referral, and more Retention create the foundation for growth.

    For more, read up on the Engine of Growth concept in Eric Ries’ book. I build on it in my book and in this video: 10 Steps to Product/Market Fit – http://www.youtube.com/watch?v=1Ge_bb8JJ90

    Cheers.

  • Matt Jordan

    Thank you for this info!

  • John

    great article

  • Eddy Jackson

    Hi Ash, thanks so much! I am currently setting up my own Weekly Web App Performance Report by Join Date. When you say “The key difference from the funnel report is that other than the join date, all other user events don’t need to have taken place within the reporting period. You’ll notice immediately that a lot of the conversion numbers (especially Purchased) are quite different because a cohort report doesn’t suffer from the boundary issues with simple funnel reports.”, do you mean that you record only the specific events that have taken place in that week? So if a user signs in but doesn’t convert until the week after, is that conversion allocated to the next week? Or do you allocate only events that take place in that week?

  • Henry Kobutra

    Great Post … even from 4 years ago :)

  • lol2

    How about a non-website type of business? After reading the book I am still wondering not only how to track from the first actionable metric down to the last one (retention or whatever it would be), but even how to track between the first and the second layer (aka: first contact, ask for free trial, etc.). People are too cautious with their personal info and it seems quite hard to me to figure out a way to establish actionable metrics to track my consumer behaviour. Thanks in advance for any comments people may have.
