8 Rules of A/B Testing – The Art in Marketing Science


“Data! Data! Data!” he cried impatiently. “I can’t make bricks without clay.”
-Sherlock Holmes, “The Adventure of the Copper Beeches”

Data will tell you the right answer. If you can’t find data somewhere, you should run a test, collect the data, and let it tell you what’s right.

Testing is one of the core marketing arts a marketer should master and practice.

You should treat A/B testing as a competition between two players – a champion and a contender. You define the rules, set up the arena, announce the players and let the data be the judge.

You don’t need fancy software, complex features, or advanced algorithms; all you need is some common sense and to follow a few basic rules. Below are eight rules to consider when A/B testing.

1. Hypothesis

Every test starts with a hypothesis that you’re trying to prove or refute. The hypothesis is a short sentence that summarizes what you’re trying to prove and should include the tested variable and the success metric that determines the winner.

Your hypothesis (e.g., “A quote on the landing page will increase conversion rate.”) defines the parameters of the test and focuses on the test variable and its effect on the result. Write it at the top of your testing document.

2. One Variable

By its nature, A/B testing examines only one variable. This means that everything else must stay constant.

For example, if you test the subject line of an email, all other variables must be the same: email copy and creative, from line, send time, landing pages, etc. If anything else differs between the two versions, your results won’t be conclusive, because you won’t be able to attribute the success of one version to a single variable.

A/B testing doesn’t mean that you only have two players; it just means that you have one variable.

3. Clear and Aligned Success Metric

Before you run the test, decide how you’re going to measure success. As with any competition, there’s only one way to win (points, votes, time, etc.), and A/B tests should be the same.

Define one success metric that will determine the winner based on the effect you are trying to get. The success metric and the variable should be as aligned as possible.

For example, if you’re trying to increase conversion rates on a landing page and decide to test the effect of the number of form fields, then your success metric must be conversion rate and the tested variable is the number of form fields. Your hypothesis would then be “The more fields on a form, the lower the conversion rate.”

One of the most common mistakes in A/B testing is to look at multiple success metrics and decide which one matters after the test has ended. That’s like telling a basketball team it lost the game because it had fewer steals, even though it scored more points.

4. Volume and Statistical Significance

For a successful test, you need enough volume to make the results statistically significant. The required volume applies not just to the size of the test groups, but also to the number of successes and the difference between them.

For example, if you send an email test to 5,000 people and your success metric is clicks, what matters is the number of clicks each version gets and whether the difference between them is large enough, relative to the size of each group, to be meaningful.

You can read more on statistical significance in this great post from Avinash Kaushik and use this great Excel template from Rags Srinivasan.
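If you want a quick sanity check in code instead of a template, the sketch below shows one common approach, a two-proportion z-test on click rates, in Python. The send sizes and click counts are placeholder numbers for illustration, not results from a real test.

import math

def two_proportion_z_test(clicks_a, sent_a, clicks_b, sent_b):
    """Is the difference between two click rates statistically significant?"""
    p_a = clicks_a / sent_a
    p_b = clicks_b / sent_b
    # Pooled click rate under the null hypothesis of no real difference.
    p_pool = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Placeholder numbers: 2,500 recipients per version and their click counts.
p_a, p_b, z, p_value = two_proportion_z_test(clicks_a=110, sent_a=2500,
                                              clicks_b=145, sent_b=2500)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")
# A common convention is to treat p < 0.05 as statistically significant.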

5. Test Group and Splits

Volume matters not just for the overall size of the test but also for each test group.

You can decide to do an even split between the control and test groups (50/50) or apply an uneven split of up to 95/5.

If you have a clear champion, a version you’re confident in (for example, a from name that has yielded a high open rate), apply a small, uneven split such as 90/10 to the test. This way you don’t harm your performance just for the sake of testing.

If you start with no clear champion, run the first test with an even split.
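To see how the split interacts with volume, here is a small back-of-the-envelope check; the list size, split, and expected click rate are hypothetical numbers used only for illustration.

# Hypothetical numbers: a 50,000-contact list, a 90/10 split, and an
# expected click rate of about 4% for both versions.
list_size = 50_000
splits = {"champion": 0.90, "contender": 0.10}
expected_click_rate = 0.04

for name, share in splits.items():
    recipients = int(list_size * share)
    expected_clicks = int(recipients * expected_click_rate)
    print(f"{name}: {recipients:,} recipients, roughly {expected_clicks} clicks")

# If the smaller group would yield only a handful of clicks, the test is
# unlikely to reach significance; use a more even split or a larger list.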

6. Randomization

Since you’re only testing one variable, you want to eliminate any variables introduced by the audience selection process. Your control and test groups should be picked at random.

A random sample is one in which every subject has an equal probability of being selected. Don’t use pseudo-random selection criteria that can skew your results, such as location, time zone, or title. Those are variables that can themselves be tested and shouldn’t be used to build your random test groups.

You can use random numbers (if you’re using Salesforce, there are multiple solutions and ideas that can help you do that; here’s one), random selection in Excel, or a factor that is effectively random for this purpose, like the first letter of the email address.
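As one more illustration, here is a minimal sketch of a random split scripted in Python; the contact IDs, function name, and split ratio are assumptions made for the example, not part of any particular tool.

import random

def assign_groups(contact_ids, test_share=0.5, seed=42):
    """Randomly assign each contact to the control or test group.

    Assignment looks at nothing but a random draw, so attributes like
    location, time zone, or title stay out of the selection process.
    """
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    return {cid: ("test" if rng.random() < test_share else "control")
            for cid in contact_ids}

# Example with made-up contact IDs and a 90/10 champion/contender split.
contacts = [f"contact_{i}" for i in range(1000)]
groups = assign_groups(contacts, test_share=0.10)
print(sum(group == "test" for group in groups.values()), "contacts in the test group")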

7. Always be Testing but Apply Common Sense

While everything is testable, not everything should be tested. Always use common sense.

Use best practices and existing data to establish what’s already known to work, so you can focus on testing variables you’re genuinely unsure about. Test variables you believe will meaningfully increase performance, not ones that will have only a marginal effect.

Use testing to make educated decisions and improve performance, not just for the sake of testing.

8. Documentation

This is one of the most neglected elements of testing, and one where software can help.

If you’re diligent about testing, you should be fanatical about documenting your tests and results. This will help you build on past lessons, avoid repeating tests, and educate your employees and successors.

If you’re comfortable publishing your results, writing a short blog post about major tests you’ve run is a great way to document them and make sure you don’t forget the outcomes. Here’s an example from Optify: a quick write-up of a test we ran on Twitter’s Auto DM.
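If a public post is more than you need, a simple structured test log works too. The sketch below suggests fields worth capturing and appends one entry to a CSV; the field names and all values are placeholders for illustration, not a prescribed format or real results.

import csv

# Placeholder field names; adapt them to your own testing document.
FIELDS = ["test_name", "hypothesis", "variable", "success_metric", "split",
          "start_date", "end_date", "control_size", "test_size",
          "control_result", "test_result", "winner", "decision"]

# Placeholder values for illustration only.
record = {
    "test_name": "Landing page quote test",
    "hypothesis": "A quote on the landing page will increase conversion rate.",
    "variable": "customer quote on the landing page",
    "success_metric": "conversion rate",
    "split": "50/50",
    "start_date": "2013-03-01",
    "end_date": "2013-03-15",
    "control_size": 2500,
    "test_size": 2500,
    "control_result": "4.4%",
    "test_result": "5.8%",
    "winner": "test",
    "decision": "Add the quote to all landing pages.",
}

with open("ab_test_log.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if f.tell() == 0:  # brand-new file: write the header row first
        writer.writeheader()
    writer.writerow(record)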

A Few Final Words of Wisdom

If you’re a marketing athlete (a professional generalist, not a specialist), you need to know how to test, and to test often. Keep in mind that testing is a means to improve performance and shouldn’t come at the expense of the experience and value you’re providing to your leads, prospects, and customers.
