My last post gave an overview of what I do as a ‘Growth Specialist’ (my new job). If you haven’t read that yet, it’s a good introduction to this post.
I laid out the process for identifying and exploiting growth opportunities as follows:
- Examining the customer lifecycle for gaps or deficiencies (aka potential improvements),
- Coming up with hypotheses for improvements, and then ranking them,
- Designing and executing experiments to test these hypotheses,
- Implementing successful changes permanently, and
- Finally, repeating the process (forever).
Think of this process as a loop: it keeps going as long as you keep learning and experimenting.
This post will describe each step in a bit more detail.
Examining the Customer Lifecycle for Gaps
When examining the customer lifecycle, I typically operate under Dave McClure’s Pirate Metrics framework of Acquisition, Activation, Retention, Revenue, Referral (so named because the acronym is “AARRR” - sounds like a pirate!).
(The framework is laid out in Dave's Pirate Metrics slide deck.)
I start by plotting all the steps of the customer journey for our particular product (or products) under these categories.
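To make that concrete, here's a minimal sketch in Python of what that mapping might look like - the journey steps below are invented for a generic SaaS product, not taken from any real one.

```python
# Hypothetical journey steps for a generic SaaS product, grouped under
# the AARRR categories. The steps themselves are made up for illustration.
customer_journey = {
    "Acquisition": ["sees ad or blog post", "visits landing page", "signs up for trial"],
    "Activation":  ["completes onboarding", "invites a teammate", "reaches first 'aha' moment"],
    "Retention":   ["returns weekly", "uses core feature regularly"],
    "Revenue":     ["converts from trial to paid", "upgrades plan"],
    "Referral":    ["shares invite link", "leaves a review"],
}

for stage, steps in customer_journey.items():
    print(f"{stage}: {', '.join(steps)}")
```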
This is where experience with other software products and knowledge of your customer is important. You need to make a judgment about where you are weak compared to others.
Data can drive some of this analysis. You can find reference metrics from various sources - Baremetrics, for example, publishes open SaaS benchmarks.
But figuring out where you are weak can be difficult. Even more complicated is figuring out why you are weak. There are often many potential explanations.
High churn (many customers canceling) can indicate that you're acquiring the wrong type of users. It could also mean your activation experience sucks (the experience a customer has when starting to use your product), or that you can't retain customers over the long term (the product doesn't keep engaging them or providing value).
Strong usage metrics paired with poor conversion from trials to paid plans can indicate that your pricing isn't aligned with the value you're providing, or that you aren't conveying that value well enough. It might also mean you need to adjust which features each plan includes.
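For reference, these diagnostic numbers are simple to compute. A back-of-the-envelope sketch (the figures are made up) might look like:

```python
# Quick back-of-the-envelope metrics, with invented numbers.
customers_at_start = 1000
customers_lost_in_month = 60
monthly_churn = customers_lost_in_month / customers_at_start          # 6.0%

trials_started = 400
trials_converted_to_paid = 28
trial_to_paid_conversion = trials_converted_to_paid / trials_started  # 7.0%

print(f"Monthly churn: {monthly_churn:.1%}")
print(f"Trial-to-paid conversion: {trial_to_paid_conversion:.1%}")
```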
But to some extent, identifying areas of deficiency (and solutions) is based on intuition developed with experience and exposure to other products.
Coming Up with Hypotheses and Priorities
After we’ve identified the areas we would like to improve, we brainstorm improvements.
Sometimes this means brainstorming on my own; often it includes looking through the backlog of suggested product improvements.
After that, I interview some of the key stakeholders - our product manager, our design lead, and whoever else I think might have an important perspective.
For large changes, we hold a ‘midi-design’ - a session at lunch that’s open to everyone and focused on a particular design change.
Once we have a list of ideas and hypotheses, we score them on the probability of success, the potential impact and reach, and the ease of implementation (the inverse of difficulty), adapted from Andrew Chen's scoring framework. Multiplying these gives a final score.
This gives us a list of ranked experiments.
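As a toy example of that scoring, here's what the math looks like in Python - the hypotheses, factor names, and 1-10 scales below are placeholders of my own, not a prescribed format:

```python
# Each hypothesis gets a 1-10 score for probability of success, impact/reach,
# and ease of implementation (so harder ideas score lower). The hypotheses
# and numbers are invented for illustration.
hypotheses = [
    {"name": "Simplify signup form",       "probability": 7, "impact": 5, "ease": 9},
    {"name": "Add in-app referral prompt", "probability": 5, "impact": 8, "ease": 4},
    {"name": "Rework pricing page copy",   "probability": 6, "impact": 6, "ease": 8},
]

for h in hypotheses:
    h["score"] = h["probability"] * h["impact"] * h["ease"]

# Highest score first: this is the ranked experiment list.
for h in sorted(hypotheses, key=lambda h: h["score"], reverse=True):
    print(f"{h['score']:>4}  {h['name']}")
```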
Designing and Executing Experiments
At this stage, we take the hypotheses from the previous step and figure out how we can test them as quickly as possible.
For startups at an early stage, this can be tough. There may not be enough data to draw statistically valid conclusions. Usually you need to lean on qualitative data as well, or accept that the experiment will take longer to run. In the early days, you may just have to make an assumption and move on rather than wait for statistical proof.
All stages of the growth process are cross-disciplinary by default, but this stage requires deeper input from each team.
Most experiments are going to require development (coding) work to implement. Most will require design work. Data science needs to be consulted to predict how long an experiment should run and to make sure the right things are being measured.
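One rough way to estimate how long a test needs to run is a standard two-proportion sample-size calculation. The sketch below assumes SciPy is available and uses made-up conversion rates and traffic figures:

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(p_baseline, p_target, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a lift from
    p_baseline to p_target in a two-proportion test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p_baseline + p_target) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_baseline * (1 - p_baseline)
                                 + p_target * (1 - p_target))) ** 2
    return ceil(numerator / (p_target - p_baseline) ** 2)

# Made-up numbers: a 4% baseline conversion, hoping to detect a lift to 5%.
n = sample_size_per_variant(0.04, 0.05)
print(f"~{n} visitors per variant")
# At, say, 500 eligible visitors a day split across two variants,
# the test needs roughly (2 * n) / 500 days of runtime.
```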
Once everyone has been consulted, we have a plan in place to prepare and execute the experiment.
While I won’t talk about it too much here, we typically have two main types of experiments in the pipeline.
The first is a big change/big impact type of experiment.
These are things like adding new features, significantly changing the onboarding flow, or adding a referral system. They are typically one-off, and sometimes take a longer period of time to test.
The second type of experiment is more limited in scope, but happens on a regular basis and requires less work.
Often these are A/B tests of elements that are always present and that we're trying to continuously improve. This might be changing the layout of the home page, adding small elements to the onboarding flow, or testing different copy on our buttons.
Often they can be implemented without significant dev work, using A/B testing software like Google Optimize or Optimizely. With these tests, we are looking for consistent, incremental improvements over time.
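The A/B testing tools usually report significance for you, but as a rough sketch of what that check involves, here's a two-proportion z-test using statsmodels (the conversion counts are invented):

```python
# Made-up results: button copy "A" vs "B" on the signup page.
from statsmodels.stats.proportion import proportions_ztest

conversions = [120, 145]    # signups per variant
visitors    = [5000, 5000]  # visitors per variant

stat, p_value = proportions_ztest(conversions, visitors)
print(f"A: {conversions[0]/visitors[0]:.2%}, "
      f"B: {conversions[1]/visitors[1]:.2%}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("Not significant yet - keep the test running or call it a wash.")
```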
Fully Implementing Successful Changes
Depending on the test, we may have hacked something together or tested on a limited group of customers - an A/B test shown to a small segment, say, or a new feature rolled out in a single country.
What makes a test successful? It varies.
Sometimes we’ve seen a statistically significant improvement, in which case the decision is easy.
Often, however, it’s a combination of quantitative and qualitative metrics.
After restructuring our pricing page, for example, we may see no change in overall customer conversion, but hear from the customer success and sales teams that potential customers are far less confused.
If we deem the experiment a success, we make the change permanent, which may require further dev work, and then document the results.
Documenting the results is a key part of this process; if experiments are completed but the results aren't documented, there's a good chance the learning will be lost, the experiment will be repeated, and resources will be wasted.
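The documentation itself doesn't need to be fancy. A lightweight record like the one below is one possibility - the fields and example values are illustrative only:

```python
# One way to keep a simple experiment log - the fields and example
# values here are a suggestion, not a prescribed format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExperimentRecord:
    name: str
    hypothesis: str
    start: date
    end: date
    result: str             # e.g. "shipped", "reverted", "inconclusive"
    metrics: dict = field(default_factory=dict)
    notes: str = ""

log = [
    ExperimentRecord(
        name="Pricing page restructure",
        hypothesis="A clearer plan comparison will reduce pre-sales confusion",
        start=date(2018, 3, 1),
        end=date(2018, 3, 28),
        result="shipped",
        metrics={"trial_to_paid_before": 0.068, "trial_to_paid_after": 0.071},
        notes="No significant conversion change, but far fewer pricing questions to support.",
    )
]
```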
Repeat, Repeat, Repeat
As I mentioned, this process is never done; it’s meant to be a continuous loop. Once you’ve investigated one hypothesis (via experiment), you implement changes (or not), document what you learned, and move on to the next hypothesis.
As someone concerned with growth in startups, you should be looking for both dramatic big wins and small, high-velocity improvements that add up over time.
Don’t forget the ultimate goal: to grow.