The Growth Process

A framework for growth teams used at Atlassian.

At this point you have awesome experiment ideas and the confidence that they’ll move your metric. This step validates that your experiment ideas can be run and will provide valuable results.

In preparation for running an experiment, it’s important to ensure it will yield valuable results. With pre-analysis, you can avoid problems such as:

  • The experiment takes longer than expected to run, or in extreme cases may never complete.
  • Your experiment design targets a feature that is rarely used by users, so it doesn’t have the impact you expect.
  • Events that you expect/want to use to measure success don’t already exist in the product or don’t work in the way you expect/need.
  • You proceed with an experiment that is too technically complex to execute relative to the value you expect to gain.

1. Hypothesis creation

Generate a hypothesis once you’ve chosen an experiment idea. The hypothesis should follow this format:

  • We believe that… your experiment treatment
  • Will result in… behavior you aim to change
  • Because… explanation of why you think this will happen
Example

We believe that prompting people to share the page if the update was important will result in more page shares because users are reminded about the importance of sharing and notifying.
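The three-part format can be captured in a small template. This is a minimal sketch; the class and field names are hypothetical illustrations, not part of any Atlassian tooling:

```python
# Minimal sketch of the hypothesis format; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    treatment: str  # "We believe that..." — your experiment treatment
    outcome: str    # "Will result in..." — behavior you aim to change
    rationale: str  # "Because..." — why you think this will happen

    def __str__(self) -> str:
        return (f"We believe that {self.treatment} "
                f"will result in {self.outcome} "
                f"because {self.rationale}.")

h = Hypothesis(
    treatment="prompting people to share the page if the update was important",
    outcome="more page shares",
    rationale="users are reminded about the importance of sharing and notifying",
)
print(h)
```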

2. Pre-analysis

It’s critical to understand how you will measure your experiment and how long it will take to run. This step is accomplished with a mock analysis, where the analyst analyzes existing data to ensure that you can correctly measure your goal.

  • Measure the size of your audience (the number of users that will see your experiment). Many experiments attempt to improve existing features with the expectation that altered usage of these features will drive a behavior. However, in some cases existing features are rarely used, making it impossible to achieve the desired changes. These experiment ideas can be discarded immediately or the design can be heavily modified.
  • Decide the minimum effect size that you want to measure. Typically, big changes take less time to measure than small changes. As a general rule, the minimum effect size should be roughly equal to the minimum effect you would be willing to implement. I.e., would an X% change in the metric be worth implementing given the expected cost of implementation?
  • Calculate the cohort size of your experiment. Given the sample size, current baseline, and minimum effect required, determine the number of users that need to interact with this experiment to get results.
  • Run a mock analysis. To verify that all of these factors work as expected, perform the analysis on existing data. Your results should be negative (no effect, since no treatment has run yet), but you will gain confidence that the experiment will run smoothly.
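The audience-size check in the first bullet can be sketched against existing event data. The event names and log format below are hypothetical placeholders for whatever your product analytics exports:

```python
# Sketch of an audience-size check on existing event data.
# Event names and the log format are hypothetical placeholders.
from datetime import date

# Pretend export of product analytics events: (user_id, event_name, day)
events = [
    ("u1", "page_viewed", date(2024, 1, 3)),
    ("u1", "invite_sent", date(2024, 1, 4)),
    ("u2", "page_viewed", date(2024, 1, 5)),
    ("u3", "page_viewed", date(2024, 1, 6)),
    ("u3", "invite_sent", date(2024, 1, 7)),
]

active_users = {u for u, _, _ in events}
feature_users = {u for u, e, _ in events if e == "invite_sent"}

audience_share = len(feature_users) / len(active_users)
print(f"{audience_share:.0%} of active users touch the feature")
```

A very small share here is the signal, mentioned above, to discard the idea immediately or heavily modify the design.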
Example

Imagine that your experiment goal is to increase new users invited, and your experiment idea is to tweak the ability to invite new users.

  • Measure the size of your audience: Let's assume 20% of active users invite people to join each month.
  • Decide the minimum effect size: We can justify detecting a relatively small change of 5%, as implementing changes to the invite dialog is relatively easy.
  • Calculate the cohort size with a sample-size calculator.
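The example numbers can be plugged into the standard two-proportion sample-size formula. This is a generic statistical sketch, not any specific calculator tool, and it assumes the 5% change is relative (a 20% → 21% invite rate) with common defaults of 5% significance and 80% power:

```python
# Per-variant cohort size via the two-proportion normal-approximation formula.
# Baseline and effect come from the example; alpha/power are common defaults.
from math import sqrt, ceil
from statistics import NormalDist

def cohort_size(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Users needed per variant to detect a shift from p1 to p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# 20% baseline invite rate, 5% relative lift -> 21%
n = cohort_size(0.20, 0.21)
print(n)  # on the order of 25k users per variant
```

Small minimum effects are expensive: halving the detectable lift roughly quadruples the required cohort, which is why the effect-size decision in the previous step directly drives how long the experiment takes.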

3. Kick off & design

The final steps prior to building your experiment:

  • Create your experiment and let your stakeholders know when you are planning to run it.
  • Experiments should typically be run exclusively to avoid bias, so work with other teams to ensure that your experiment will have ample time to run and doesn’t conflict with other work.
  • Ensure the start/stop dates of your experiment are defined.
  • Complete the design of your experiment!

Output

An experiment page with fully validated ideas and experiment baseline.

Driver

Experiment lead (assistance from Analyst)

Checklist