3 Rules for Building Features in a Lean Startup

Written by Ash Maurya

Validated learning about customers is the measure of progress in a Lean Startup – not lines of working code or product development milestones achieved.

So let’s take a look at where in the product development process this type of learning happens:

Where we learn about customers

While some learning happens during the requirements stage (driven by customer development activities), most of the learning happens only after we ship a release, with very little learning during development and QA.

Even though building a product is the purpose of a startup, product development actually gets in the way of learning about customers.

While we can’t eliminate development/QA or increase customer learning during those stages, we can shorten the cycle time from requirements to release so we get to the learning parts faster. That is exactly what Continuous Deployment does.

I’ve written about my transition from a traditional development process to Continuous Deployment as a case study on Eric Ries’ Startup Lessons Learned blog. The key concept in Continuous Deployment is switching from large batch sizes to small batch sizes. For me, that meant moving from releasing every 2 weeks to releasing every day. You can’t always build a feature in a day, but you get good at building features incrementally and deploying the non-user-facing pieces first. The result was an immediate and noticeable improvement in cycle time, an acceleration in feedback, and, most importantly, more time for non-product-development activities like learning.
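
One common way to deploy non-user-facing pieces first is a simple feature flag: the code ships to production, but stays dark until the flag flips. A minimal sketch in Python (the flag names and in-memory store are hypothetical; a real system would back the flags with a config service or database so they can change without a redeploy):

    # Minimal feature-flag sketch: half-built code ships daily but stays
    # dark until its flag is switched on. (Hypothetical flag names; a real
    # store would live outside the process so flags flip without a deploy.)

    FLAGS = {
        "new_sharing_ui": False,  # deployed, not yet user-facing
        "csv_export": True,       # fully released
    }

    def is_enabled(flag: str) -> bool:
        """True if the named feature is switched on (unknown flags: off)."""
        return FLAGS.get(flag, False)

    def render_dashboard(user: str) -> str:
        """Build the dashboard; the new widget appears only behind its flag."""
        parts = [f"<h1>Dashboard for {user}</h1>"]
        if is_enabled("new_sharing_ui"):
            parts.append("<div id='sharing'>new sharing UI</div>")
        return "\n".join(parts)

    print(render_dashboard("alice"))  # renders without the dark feature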

But even with a streamlined product development flow, how do you make sure you’re actually building what customers want and not simply cranking out features faster?

Here are some rules I use:

How I build features

Rule 1: Don’t be a feature pusher

If you’ve followed a customer discovery process, identified a problem worth solving, and as a result defined a minimum viable product, don’t push any new features until you’ve validated the MVP. This doesn’t mean you stop development, but rather that most of your time should be spent measuring and improving existing features, not chasing the next shiny feature.

From experience, I know this can be a hard rule to enforce. Many of us still measure progress in lines of working code and believe our problems with traction are rooted in not finding the right killer features. The next rule helps with that.

Rule 2: Constrain the features pipeline

A good practice for enforcing that 80/20 split (roughly 80% of effort on measuring and improving existing features, 20% on new ones) is constraining the features pipeline. This is a common practice from Agile, with the addition of a validated-learning state for every feature.

Ideally, a new feature must be pulled by more than one customer for it to show up in the backlog. It’s okay to experiment with some new features that come straight out of your head, but I still try to find ways to test them with a few customers first (in a demo or product presentation) before committing them to the backlog.

Passion around a vision is good.
Passion around building what customers want is better.

The number of features in progress is constrained by the number of developers, and so is the number of features waiting for validation. This ensures that you can’t start a new feature until a previously deployed feature has been validated.
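
One way to picture this constraint is as a kanban-style board where every stage has a hard capacity, and validating (or killing) a feature is the only way to free a slot. A toy sketch in Python (the stage names and limits are illustrative, not a prescribed tool):

    # Toy model of a constrained features pipeline: both in-progress work
    # and features awaiting validation are capped by the number of
    # developers, so new work can't start until something gets validated.

    class PipelineFull(Exception):
        """Raised when a stage is at capacity: stop and validate instead."""

    class FeaturePipeline:
        def __init__(self, num_developers: int):
            self.limits = {"in_progress": num_developers,
                           "validating": num_developers}
            self.stages = {"backlog": [], "in_progress": [], "validating": []}

        def add_to_backlog(self, feature: str, customer_requests: int):
            # A feature must be pulled by more than one customer to qualify.
            if customer_requests < 2:
                raise ValueError(f"{feature}: needs more than one customer request")
            self.stages["backlog"].append(feature)

        def _move(self, feature: str, src: str, dst: str):
            if len(self.stages[dst]) >= self.limits[dst]:
                raise PipelineFull(f"{dst} is full; validate a feature first")
            self.stages[src].remove(feature)
            self.stages[dst].append(feature)

        def start(self, feature: str):
            self._move(feature, "backlog", "in_progress")

        def deploy(self, feature: str):
            self._move(feature, "in_progress", "validating")

        def validate(self, feature: str):
            # Pass or fail, validation is what frees a slot for new work.
            self.stages["validating"].remove(feature)

When the validating column fills up, deploy() is blocked, which keeps in-progress full and blocks start() in turn – a forced stop until something gets validated.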

Rule 3: Close the loop with qualitative metrics first

Because quantitative metrics can take some time to collect, I prefer to validate all features qualitatively first. If I don’t get a strong initial signal, the feature is nixed immediately. Otherwise, it lives until the quantitative data is in.

For qualitative feedback, I’ll contact the customer (or customers) who requested the feature once it goes live and ask for feedback. Following Eric’s advice on focusing on macro effects, it’s not enough to just test the “coolness” factor of the feature; rather, test whether it solves the customer’s problem and, even more importantly, whether it can make or keep the sale. I also tend to highlight new features in face-to-face usability tests and periodic “release update” newsletters so they get more attention.

On the quantitative side, I still use a combination of KISSmetrics and Mixpanel to collect usage data on the feature.
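
Collecting that usage data amounts to firing an event wherever the feature is exercised. A minimal sketch using the official mixpanel Python client (pip install mixpanel); the project token, user id, and event names below are placeholders:

    # Feature-level usage tracking via Mixpanel's Python client.
    from mixpanel import Mixpanel

    mp = Mixpanel("YOUR_PROJECT_TOKEN")  # placeholder token

    def track_feature_use(user_id: str, feature: str, **props):
        """Record one use of a feature so its adoption can be measured."""
        mp.track(user_id, "feature_used", {"feature": feature, **props})

    # e.g. called from the request handler that serves the new feature:
    track_feature_use("user-42", "csv_export", plan="pro")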

Counterintuitive?

Some of these ideas, such as using small batch sizes, constraining the features pipeline, and forcing a stop if we aren’t learning, seem counterintuitive at first. Most of us have been trained to specialize by department – development, QA, marketing – turning ourselves into efficient large-batch processing machines running at full capacity.

This mode of working doesn’t work even in the manufacturing world, where tasks are more predictable (less variable) and repetitive. Building software, on the other hand, is highly variable, and the problems with large batches are further exacerbated. For a technical treatment of why this is so, I highly recommend Donald Reinertsen’s book, “The Principles of Product Development Flow”.

In a Lean Startup, Product Development isn’t just about the code anymore. Everyone is responsible for learning about customers.


Update: If you liked this content, consider checking out my book, Running Lean, which dedicates 50 pages to this topic alone.

You can learn more here: Get Running Lean.

  • http://www.facebook.com/profile.php?id=607918 Brian Wang

    Excellent insight as always Ash!

    I think many of us have the inclination to focus on releasing as much as possible without enough emphasis on capturing how customers respond to each release. Before learning whether feature X was a net positive or not, we concentrate our efforts on the next update because we convince ourselves that a release = progress. I suspect that the “release early, release often” mantra is often misused in this context.

  • http://www.ashmaurya.com/ Ash Maurya

    Yes, “release early, release often” can be quite harmful to a startup. I made this mistake with my last product, BoxCloud. After I launched, I started getting lots of feedback and feature requests (a good problem), but I had no way to gauge which ones to follow. I used a simple popularity count, which, while better than nothing, does not necessarily surface the “right” features. The end result was lots of unused features over time, which created more work to maintain.

    I have since contacted all my paying customers, stripped the product back down to the bare essentials, and don't add anything unless it meets the criteria above.

  • http://www.dirtyphonebook.com Benjamin Willer

    We took the opposite approach of “release early, release often” with http://www.dirtyphonebook.com and released an amazingly polished product and ecosystem with the first release. There are different ways to approach any problem, and it depends on what your business goals, resources, expertise, and other factors indicate is the best approach.

  • http://myOnePage.com/joel Joel Gascoigne

    Great stuff Ash, and this is perfect timing at least for me because we are going through this right now with OnePage, and we recently made the decision to put more time into our MVP before we build additional features.

    You've inspired me to step up the functional tests I have in place too. I have a few Selenium tests but I need to put more in place. Do you write functional tests for every new piece of functionality or change to an existing feature, or do you have some reasoning behind how much testing to do? Also, you mentioned Selenium Grid and Go Test It; wondering if you've tried SauceLabs, since I've had a good experience with it.

    Keep up the good work Ash, I'm hosting a Simulcast of the Startup Lessons Learned conference (Birmingham, UK) and am looking forward to hearing your case study.

  • http://www.ashmaurya.com/ Ash Maurya

    Joel –

    Yes, we have a policy now of adding a functional test with each user-facing update, but we started by prioritizing tests for the user activation flow. This is the path users take when they first interact with the product.

    I've heard lots of good things about SauceLabs but I haven't used them yet.
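
    A minimal sketch of what such an activation-flow test might look like, using the Python Selenium bindings (the URLs and element IDs here are hypothetical):

        # Functional test for the activation flow: sign up, land on the
        # dashboard. URLs and element IDs are made up for illustration.
        import unittest
        from selenium import webdriver
        from selenium.webdriver.common.by import By

        class ActivationFlowTest(unittest.TestCase):
            def setUp(self):
                self.driver = webdriver.Firefox()

            def test_signup_reaches_dashboard(self):
                d = self.driver
                d.get("https://staging.example.com/signup")
                d.find_element(By.ID, "email").send_keys("test@example.com")
                d.find_element(By.ID, "password").send_keys("s3cret!")
                d.find_element(By.ID, "signup-button").click()
                # A new user should land on the dashboard after signing up.
                self.assertIn("/dashboard", d.current_url)

            def tearDown(self):
                self.driver.quit()

        if __name__ == "__main__":
            unittest.main()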

  • alexandermimran

    Great post Ash!

    Totally agree that shortening the cycle makes sense.

    Wanted to add usability to the mix: While I think a lot can be discovered once a feature is released, it's sometimes tough to know whether it's failing because of a misalignment with user needs or a sheer inability to function or integrate properly with the rest of the user experience.

    It's best to get some usability testing before integrating new features. Really small startups can get away with using their user base as test subjects, but otherwise you risk not knowing why a feature is failing even though you validated it in the discovery process.

    Design should be solid and features should be delivered the way a user would want. Some users can't explain if something isn't designed properly so asking the right questions is key.

    Ash – what is your design process for bringing new features into your app? I haven't found a good article/post about this WRT customer development and think it's an important issue…

  • http://myOnePage.com/joel Joel Gascoigne

    That sounds like a great policy; I'm going to try to emulate it. Thanks for sharing :)

  • http://www.knowledgescreen.com Mark Dorosz

    Hi Ash,

    This is a really interesting article and formalizes what I have found through experience. I launched Knowledge Screen about 5 months ago with a view to focusing on corporate education, but found more of our paying customers coming from marketing. By minimizing investment in the solution offering, we were able to hold back until we'd confirmed a market for our capabilities, allowing our service offering to evolve rather than being tied down to a fixed product we'd spent hundreds of man-hours perfecting before launch.

    I think the same lean approach is also relevant to a startup's marketing. Being able to change your website on the fly to leverage what's “sticking” is a huge benefit and outweighs the limitations of a simple WordPress site versus a complex Flash site that you spent thousands to develop, only to find it's irrelevant to where your solution is going.

    Keep in touch
    Mark
    http://www.knowledgescreen.com

  • http://giffconstable.com giffc

    Thought I'd pipe in with two comments, Ash:
    1. “don’t push any new features until you’ve validated the MVP” — I basically agree with this, although in the case when you are still figuring out what MVP really looks like (emphasis on the viable), this gets grey. Viable might be changing what you have. It might mean adding something crucial that is missing. Judgement call.

    2. “Passion around a vision is good.
    Passion around building what customers want is better.”

    Not sure how I feel about the way you wrote that particular soundbite given how easily people misunderstand. You shouldn't necessarily build what customers ask for, but rather what you think is the best way to solve their problems. i.e. vision meets customer wants. Again, judgement calls, and very intertwined.

  • http://twitter.com/ashmaurya Ash Maurya

    Giff –

    I agree on both points.

    A couple of points on #1:

    - I do reserve 80% for improving existing features. This can include things like usability, sign-up flow improvements, and even sub-features that supplement the original feature set if needed. But I do try to keep within the original top 3 problems (unique value proposition) unless they fail validation with customers. Then it's time to diagnose why and maybe revisit customer discovery. My definition of a feature lines up more closely with the top 3 problems determined during customer discovery, and there isn't always a 1:1 relation between them, i.e., how you solve a problem can vary a lot, and it's fine to iterate on that as long as you aren't changing the underlying problem you set out to solve.

    - There is the additional 20% for new features, which can be used to test the “grey” areas you talk about. But again, they are not a pass for deviating too far from the original UVP you've already established unless you can prove it isn't working and why.

    On #2, I do draw a distinction between “what customers want” and “what they say they want”. There is a big difference and it's your job to distill that difference into the product.

  • http://twitter.com/ashmaurya Ash Maurya

    Alex –

    Good point. I do use usability testing at various stages of the process, including the initial customer discovery interviews as well as whenever I demo the product. The interface is the first thing I build (usually in HTML/CSS), and I spend a lot of time figuring out where and how to integrate it into the product. Once I've got it in, I'll demo it to some friendlies as an upcoming feature, or mock it up and show it to new customers as part of a product presentation demo.

    It's only after this early feedback that I'll code up the full feature.

  • alexandermimran

    Cool… Sounds like an interesting process! Would love to hear more – maybe in a future post?

  • http://www.ashmaurya.com/ Ash Maurya

    Sure thing.

  • alexandermimran

    Ha – sorry, I didn't mean to demand blog posts from you! Thanks so much though. Looking forward to learning more about your feature integration and decision process. 37signals does a great job of this: http://37signals.com/designexplore/

  • http://www.ashmaurya.com/ Ash Maurya

    On the contrary, I'm always looking for blog post ideas so thanks.
    I think it would make for a good post :-)

  • Pingback: How We Build Features

  • Pingback: Its hard to stay single-minded on your offering « Web. Startups. Life

  • Pingback: Creating a Product Process that Works for Your Team | Skillshare Team Blog | Skillshare

  • Pingback: Drei Regeln für die Feature-Entwicklung in Lean Startups - //SEIBERT/MEDIA Weblog

  • Pingback: The Trough of Despond