There is a fundamental tension between speed and accuracy in growth engineering, as there is in many other pursuits. On one hand, improvements are usually multiplicative, so the rate of growth increases with the pace of implementation. On the other hand, changes that squash growth can bring progress to a halt, so you need to be quite certain that the changes you decide to keep are constructive. In this chapter, we will examine a number of evaluation methods and discuss their merits and drawbacks, so you can develop a good intuition for which methods will increase your likelihood of success within your resource constraints.
I like to think of the process of experimentation and testing as similar to finding one’s direction using a compass. There exists an optimal route from point A to point B, but without any guidance, the world of opportunities is three hundred and sixty degrees wide; the highest growth path could be anywhere. The challenge is that the needle on this compass does not tell you where to go with perfect certainty. Different evaluation methods provide you with various kinds of information and different levels of certainty about which course is best. Every time you carry out a new experiment and evaluate the outcomes, you get a new reading on which steps to take next or even when to step backwards.
The entire process of growth engineering starts with identifying which specific aspects of marketing, product implementation, or operations are underperforming and could be improved. Once you have identified what might be improved, the next step is understanding how it might be improved. The last step involves implementing enhancements and evaluating whether those changes actually caused positive movement in your growth metrics. Fortunately, there are methods that allow you to answer all three questions: what can be improved, how it can be improved, and whether it was improved. Evaluation methods in growth engineering can be sorted into three main groups: descriptive (what), qualitative (why and how), and experimental (whether).
Descriptive analysis is a form of quantitative research and tends to tell you what could be improved. An example of this is examining your website’s analytics reporting and finding that a product page has a very high bounce rate, meaning that many people leave right after landing on it. This simple analysis suggests that the product page (the what) is a potential problem area. It helps you identify a specific issue, but it does not provide deeper insight, such as showing that a certain change caused a given outcome.
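To make the bounce-rate example concrete, here is a minimal sketch of how you might compute it yourself from raw page-view logs. The log format, session IDs, and page paths are all hypothetical; real analytics tools report this metric directly, but the underlying calculation looks something like this:

```python
from collections import defaultdict

# Hypothetical page-view log: (session_id, page) pairs in visit order.
page_views = [
    ("s1", "/product"), ("s2", "/product"), ("s2", "/checkout"),
    ("s3", "/product"), ("s4", "/home"), ("s4", "/product"),
    ("s5", "/product"),
]

# Group views by session, preserving order, to recover each
# session's landing page and total number of pages viewed.
sessions = defaultdict(list)
for session_id, page in page_views:
    sessions[session_id].append(page)

def bounce_rate(landing_page):
    """Share of sessions that landed on `landing_page` and viewed nothing else."""
    landed = [pages for pages in sessions.values() if pages[0] == landing_page]
    if not landed:
        return 0.0
    bounced = sum(1 for pages in landed if len(pages) == 1)
    return bounced / len(landed)

print(f"/product bounce rate: {bounce_rate('/product'):.0%}")
```

In this toy log, four sessions land on /product and three of them view nothing else, so the page's bounce rate is 75% — exactly the kind of descriptive signal that flags a page as a potential problem area without explaining why.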
Qualitative evaluation is a group of activities that center around understanding why certain things might be suboptimal as well as how they can be improved. These methods are based on a conversation with the customer through direct interviews, observation, and surveys. Whereas descriptive analysis is a kind of smoke test, qualitative methods are more akin to investigative journalism. In the example above, you might interview website users to understand how they perceive the product page on your website. What do they find compelling, confusing, or missing? This kind of information helps you understand why your product or customer experience falls short and, in turn, how it can be fixed.
Having employed descriptive evaluation to spot areas that could be improved and discovering likely ways to optimize them through qualitative methods, you are ready to implement specific enhancements. Continuing with the above example, let’s say that many customers that you interviewed stated that the imagery on the product page was not very compelling and didn’t really give them a sense of the product that you’re selling. Armed with that information, your team adds more compelling images of the product. Going back to your website analytics, you find that more people do seem to be staying on the page. You’ve succeeded, right? Not necessarily. It just so happens that you made this change in the weeks leading up to Christmas, so are customers more engaged because of the change that you made or because they are much more motivated to buy a present for their loved ones?
In order to evaluate whether the changes you made caused corresponding improvements in your growth metrics, you will likely need to use experimental or quasi-experimental methods. The most common of these is A/B testing: many contemporary tools let you test two versions of a web page or marketing email by randomly showing each version to customers and measuring key metrics. Unfortunately, despite its popularity, A/B testing does not lend itself to certain contexts, such as physical spaces, products, and human interactions. In those cases, businesses may have to turn to field experiments such as randomized controlled trials. The challenge is that those kinds of evaluations can be extremely costly, so they should only be employed when the highest level of certainty is required.
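The statistical core of an A/B test is deciding whether the difference between the two versions is larger than chance alone would produce. One standard way to do this is a two-proportion z-test; the sketch below uses entirely hypothetical visitor and conversion counts, and real tools handle this math for you:

```python
import math

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is version B's conversion rate different from A's?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    # Standard error of the difference between the two proportions.
    se = math.sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: 10,000 visitors randomly assigned to each variant.
z, p = ab_test_z(conversions_a=480, visitors_a=10_000,
                 conversions_b=560, visitors_b=10_000)
print(f"z = {z:.2f}, p-value = {p:.4f}")
```

Here version B converts at 5.6% versus A's 4.8%, and the p-value comes out below the conventional 0.05 threshold, so you would have reasonable grounds to believe the new version genuinely performs better rather than having gotten lucky.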
The great balancing act here is between cost, time, and certainty. Let’s say that you are heading the growth team at a national bank and you find, through a survey, that many customers are unsatisfied with their experience at bank branches. Your team goes out to interview customers at a number of branches and finds that certain themes emerge: customers tend to complain about the same things, such as long lines, lack of seating, unfriendly staff, and even the cold, industrial interior design. You decide that the first thing you’re going to try is training bank staff to act friendlier with customers. Now, how should you evaluate whether this change positively affected the business? Do you need a large, fancy randomized controlled trial? Not necessarily. More positive sentiment in a survey or in customer interviews might be enough to convince you to roll the training out nationwide. Then again, perhaps the training program will be extremely costly, and the executive team wants you to be dead certain that it will be worth it. In that case, the only option might be a more rigorous statistical analysis.
The goal of the following sections is to educate you about common evaluation methods, their benefits, and their drawbacks.
This post is part of the Growthzilla Book series, which is an online draft of the print edition that will be available in 2018. Be sure to check back next Thursday to learn about descriptive evaluation methods. New sections of Growthzilla are published every week.