The Growth Process

A framework for growth teams used at Atlassian.

Make the experiment a permanent fixture by implementing it in product. If we find a successful result, we’ll want to make the changes permanent so that we fully reap the benefit of the experiment. The onus is on the team that ran the experiment to productionize it.

1. Preparation

We can’t jump in blindly though. Experiments are often built quite differently from permanent product changes, so we need to consult the relevant product teams to get their advice and estimate the cost of implementation. The Development Team Lead is listed as responsible for making this happen, although the lead developer on the experiment could do it if they are keen.

After weighing up the cost and benefit, we may decide that the experiment is not worth productionizing. If that’s the case, we either need to ditch the idea (keeping it in mind for when platform changes come along that make it easier to implement), or re-run the experiment with variations that would be cheaper to productionize.

If we decide to proceed, the Product Manager and Design Team Lead need to get buy-in from the relevant product teams. This is an opportunity for the product owners to give feedback on the experiment and make sure it’s compatible with any changes coming to the product. Ideally, we want to keep the changes to a minimum. We should think of this stage as fine-tuning, rather than a re-design. The further we deviate from the original experiment, the less likely we are to see the same result.

2. Building

The designers may want to polish the experiment designs, but again, we want to keep the changes to a minimum. If there are ideas for bigger changes, they should be run as a follow-up experiment once the original experiment is productionized.

The Development Team Lead will need to make space in their backlog to get this built. The developers who built the experiment may also be the ones to productionize it, but they don’t have to be. The advantage is that the original developers are already familiar with the experience they are building. If the implementation requires a particular skill, though, someone else on the team may be better suited to building it.

To get the changes built, we should follow the process of the team that owns the codebase. This might include a QA demo, and most likely a pull request reviewed by the codebase owners.

Wrap the changes behind a feature flag, so that they can be rolled out in a measured fashion. The underlying product and steps higher up in the funnel may have changed since the experiment ran, and we may have made adjustments to the experiment experience when building it, so we need to verify that we observe the same impact.
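
As a rough illustration of that gating, here is a minimal sketch of a percentage-based rollout behind a feature flag. The flag key, rollout percentage, and bucketing helper are hypothetical placeholders; in practice the owning team’s existing feature-flag tooling would handle this.

```python
import hashlib

# Hypothetical flag key and starting rollout percentage for the productionized change.
FLAG_KEY = "growth.productionized-experiment"
ROLLOUT_PERCENTAGE = 10  # start small, then ramp up as confidence grows


def in_rollout(user_id: str, flag_key: str, percentage: int) -> bool:
    """Deterministically bucket a user into the rollout by hashing the user id,
    so the same user always sees the same variant."""
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percentage


def render_experience(user_id: str) -> str:
    """Serve the new experience only to users inside the rollout cohort."""
    if in_rollout(user_id, FLAG_KEY, ROLLOUT_PERCENTAGE):
        return "new productionized experience"  # treatment cohort
    return "existing experience"                # control cohort


if __name__ == "__main__":
    print(render_experience("user-123"))
```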

3. Deployment

A period of warranty needs to be established with the product owners (e.g. 6 months), during which the team building the changes is accountable for fixing any bugs that arise in the wild. After that period, responsibility for bugs passes to the product owners.

Measure the changes by rolling them out progressively. Turn on the feature flag for a subset of customers and measure the difference against a control cohort (a rough sketch of that comparison follows the list below). There are three scenarios to deal with:

  • Positive statistical significance - Same as the experiment result! Life is good.
  • No statistical significance - Think about whether we should keep this change or not. Is it worth the maintenance cost? Are there improvements we could make before measuring it again?
  • Negative statistical significance - Remove or iterate. If iterations still show a negative result, the change can’t be kept.
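
As a rough sketch of that comparison, the snippet below runs a two-proportion z-test on conversion counts from the rollout and control cohorts. The cohort sizes, conversion counts, and the 0.05 threshold are made-up placeholders; in practice the analysis would come from whatever experimentation tooling the team already uses.

```python
from statistics import NormalDist


def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference in conversion rates
    between cohort A (rollout) and cohort B (control)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))


# Hypothetical cohort sizes and conversion counts.
p_value = two_proportion_z_test(conv_a=540, n_a=10_000, conv_b=480, n_b=10_000)
rollout_better = 540 / 10_000 > 480 / 10_000

if p_value < 0.05 and rollout_better:
    print("Positive and significant: keep ramping up the rollout.")
elif p_value < 0.05:
    print("Negative and significant: remove or iterate.")
else:
    print("No significant difference: weigh the maintenance cost before keeping it.")
```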

Once the experiment is productionized, it’s worth considering improving it even further by running more experiments. Think of the productionized experiment as the new baseline to beat!

Output

Productionized feature

Driver

Development Team Lead, Product Manager and Design Team Lead

Checklist