Feb 2, 2023
Startup Growth
8 min read

5 Steps to Teach Yourself Growth Marketing (Part IV)

Without customers there is no business. So how does an owner drive new customers to their startup, or keep existing ones engaged? The answer is simple: Growth marketing.

It’s no secret that growth marketing is a valuable skill set in the current job market. A LinkedIn search for “growth marketing” in October 2022 surfaced more than 50,000 openings across a variety of employers, from small startups to 30,000-employee behemoths like Uber.

The inevitable next question: how does one learn the skills of growth marketing? I’m here to tell you that the best growth marketing course isn’t an actual course.

As a growth marketer who has honed this craft for the past decade and been exposed to countless courses, I can confidently attest that with this subject, doing is the best form of learning. That doesn’t mean you need to immediately join a Series A startup or land a growth marketing role at a large corporation that can afford to train you on the job.

Instead, I have broken down how you can teach yourself growth marketing in five easy steps. Sit back, relax, and I hope you will enjoy this series!

Teach Yourself Growth Marketing, Part IV: A/B test growth experimentation

In part four of my five-part series (Teach Yourself Growth Marketing), I’ll take you through a few standard A/B tests to begin with, then show which tests to prioritize once you have assembled a large enough list, and finally how to run these tests with minimal external interference. For the entirety of this series, we will use the example of learning growth marketing with a direct-to-consumer (D2C) athletic supplement brand.

A core principle that differentiates growth marketing from typical advertising programs is heavy, data-driven experimentation fueled by hypotheses. Let’s cover growth experimentation in the form of A/B testing.

How to properly A/B test

To start: A/B testing, or split testing, is the process of sending traffic to two variants of something at the same time and analyzing which performs better.

That sounds simple, but there are hundreds of ways to invalidate an A/B test, and I’ve witnessed most of them while consulting for smaller startups. During my tenure leading the expansion of rider growth at Uber, we relied on advanced internal tooling solely to ensure that the tests we performed ran near-perfectly. One of these tools was a campaign name generator that kept naming consistent, so that we could analyze accurate data once tests concluded.
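You don’t need Uber’s internal tooling to get the same benefit. Below is a toy Python sketch of the idea; the naming format is my own assumption for illustration, not Uber’s actual convention:

```python
from datetime import date

def campaign_name(channel, audience, variable, variant):
    """Build a consistent, machine-parseable campaign name.

    Format (illustrative assumption): channel_audience_variable_variant_YYYYMMDD
    """
    return f"{channel}_{audience}_{variable}_{variant}_{date.today():%Y%m%d}".lower()

print(campaign_name("email", "lapsed-buyers", "emoji-subject", "test"))
# e.g. email_lapsed-buyers_emoji-subject_test_20230202
```

With names like these, you can reliably group and compare campaigns when the test data comes back.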

Some important factors to consider when running A/B tests:

  • Do not run tests with multiple variables
  • Ensure traffic is being split correctly
  • Set a metric that is being measured

The most common reason tests come back invalid is the presence of confounding variables. It isn’t always obvious: even testing different creative in two campaigns that have different bids can skew the results. When setting up your first A/B test, make sure there’s only one difference between the two email campaigns or data sets being tested. For example, if you want to test whether emojis perform better in your email subject lines, add an emoji to one variant without changing any other copy between the two.

After you’ve selected which variable to test, confirm traffic is being split correctly and evenly between variants. If you’re testing on an email platform like Mailchimp, it has built-in tools to split email traffic evenly. However, if you’re running a test on a channel like Facebook, the easiest way to split traffic is to manually separate the recipients while keeping the budget even between variants.
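If you do end up splitting recipients manually, a few lines of scripting are more reliable than eyeballing a spreadsheet. Here’s a minimal Python sketch; the helper and the email list are hypothetical:

```python
import random

def split_recipients(recipients, seed=42):
    """Randomly shuffle recipients, then split them 50/50 into two variants."""
    shuffled = list(recipients)            # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)  # fixed seed makes the split reproducible
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical email list for our D2C supplement brand
emails = [f"customer{i}@example.com" for i in range(1_000)]
variant_a, variant_b = split_recipients(emails)
print(len(variant_a), len(variant_b))  # 500 500
```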

Finally, make sure you have a metric you’ll use to measure the success of your test. If you’re testing subject lines in an email, the right metric to goal on is open rate. Determining this metric at the start makes picking a winner much faster once the data starts to come in. You should also consider secondary metrics further down-funnel. For example, if you’re testing ad creative and produce a click-bait asset, it may artificially boost the ad’s click-through rate while the down-funnel conversion rate suffers relative to the control. As this example shows, it’s important to watch secondary metrics rather than relying on a single primary metric to measure impact.
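To put rough numbers on that click-bait trap, here’s a quick sketch with entirely made-up results. The click-bait variant “wins” the primary metric but loses badly down-funnel:

```python
# Entirely hypothetical results for two ad variants
variants = {
    "control":    {"impressions": 10_000, "clicks": 150, "purchases": 15},
    "click-bait": {"impressions": 10_000, "clicks": 400, "purchases": 12},
}

for name, v in variants.items():
    ctr = v["clicks"] / v["impressions"]  # primary metric: click-through rate
    cvr = v["purchases"] / v["clicks"]    # secondary metric: click-to-purchase
    print(f"{name}: CTR {ctr:.1%}, click-to-purchase {cvr:.1%}")

# control: CTR 1.5%, click-to-purchase 10.0%
# click-bait: CTR 4.0%, click-to-purchase 3.0%
```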

Start with these tests 

If we were brainstorming the types of tests to run for an athletic supplement brand, here’s a quick, basic list that I’d begin with: 

  • Emoji vs no emoji in email subject line
  • Text-only email vs header image in email
  • Value prop 1 vs value prop 2 in ad creative

As I said, these are basic tests; I’d recommend them as a starting point because they’re simple to set up and free of confounding variables. For our athletic supplement brand, here’s what an emoji subject line test could look like:

  • Control: Get your greens in one pill 
  • Test: Get your greens in one pill 💊 

To show what a deceptively straightforward test looks like, consider one measuring whether male or female actors perform better in ad creative. To control for confounding variables, you’d need both actors to shoot their videos in the exact same spot, deliver the script with minimal differences in pitch, and so on. This is a test you’d likely run with more than one male and female actor before making a final decision on which segment performs best. Not a great first test unless you’re feeling the need for a challenge!

Selecting winners 

Now for the fun part: it’s time to analyze your results and select your test winners. Most importantly, make sure you’ve achieved statistical significance (stat-sig) before determining a winner. Stat-sig tells you whether your result is likely due to chance or whether you have a consistent winner.

There are many stat-sig calculators on the web and the one that I use is Neil Patel’s calculator.

If you ran the email subject line test with an open rate test metric, enter your sends and opens in a stat-sig calculator to determine if you can call the winner. When I was working at Coinbase, we were comfortable with a 70-80% minimum confidence level on our stat-sig calculator.
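If you’d rather script the check than paste numbers into a calculator, a standard two-proportion z-test gives the same answer. A minimal Python sketch with made-up send and open counts:

```python
import math

def test_confidence(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test: how confident are we the variants truly differ?"""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = abs(p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-tailed
    return 1 - p_value

# Hypothetical subject-line test: 1,000 sends per variant
confidence = test_confidence(opens_a=210, sends_a=1_000, opens_b=250, sends_b=1_000)
print(f"{confidence:.0%} confident the variants differ")  # ~97%
```

Against the 70-80% bar mentioned above, this hypothetical result would be a comfortable call.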

Prioritizing growth tests

You’ve launched your first A/B test, and now a plethora of new ideas is springing to mind. That’s a great problem to have, and it’s the beauty of growth marketing. While leading fleet acquisition at Postmates, I constantly had to prioritize which tests we’d run because of the abundance of ideas and growth mediums we were examining. When a long list of good ideas meets the limited bandwidth of a startup, thoughtful prioritization becomes paramount.

A quick eyeball test often works for ranking the list, but if you’d prefer a more methodical approach, consider RICE scoring (Reach, Impact, Confidence, and Effort).

Example of a RICE scoring spreadsheet. Image courtesy of Jonathan Martinez.

In the example above, Test 3 has the highest RICE score, calculated by multiplying Reach, Impact, and Confidence, then dividing the product by Effort. Test 3 should therefore be prioritized as the first test to conduct, given its high impact and low effort to launch.
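If your backlog outgrows a spreadsheet, the same math is a few lines of Python. Here’s a minimal sketch; the reach, impact, confidence, and effort values are illustrative, not taken from the image above:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical backlog for our supplement brand; all scores are made up
backlog = [
    ("Emoji vs no emoji in subject line",  5_000, 2, 0.8, 1),
    ("Text-only vs header image in email", 5_000, 4, 0.5, 2),
    ("Value prop 1 vs 2 in ad creative",   8_000, 6, 0.8, 3),
]

for name, *params in sorted(backlog, key=lambda t: rice_score(*t[1:]), reverse=True):
    print(f"{rice_score(*params):>8,.0f}  {name}")
```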

One best practice to get into the habit of is maintaining a repository sheet that tracks every test you’ve run and its results. It can serve as an information bank whenever you need to look back.

Now that you have a few growth tests under your belt, let’s dive into which metrics matter the most for your startup in the final part of our series.
