Don’t miss out on better marketing results by failing to A/B test.
A/B testing lets you tweak any number of variables in your email, social media, and content marketing efforts and track how those changes make a difference.
But, like a science experiment, this relatively simple process can be thrown off by a single misplaced step – so follow these rules and get it right.
And you thought you escaped the scientific method when you went into marketing, didn’t you?
1. Record what you know, what you expect
Before you begin testing, note what variable you’re testing and what sort of impact you expect it to have. This enables you to track actual results against assumptions. Having this information can guide future tests.
Note: Do not do anything that will invalidate the test. If you’re comparing the effectiveness of a green call-to-action button versus a red one, make sure everything else in the test is identical. Even a subtle change to the content or design could skew the results.
2. Don’t rely on a small sample
The larger your test group, the better your chance of getting accurate results. Testing a subject line on 10 email recipients could lead you to false conclusions: a 90 percent open rate sounds impressive, but at that scale the email with the new subject line may simply have gotten lucky.
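To see why small samples mislead, here’s a minimal sketch of one common way to check whether a difference in open rates is real: a two-proportion z-test. The function name and the example numbers are illustrative assumptions, not from the article; it uses only the Python standard library.

```python
from statistics import NormalDist

def open_rate_p_value(opens_a, sent_a, opens_b, sent_b):
    """Two-sided p-value for the difference between two open rates
    (two-proportion z-test). A small p-value (e.g. below 0.05) means
    the gap is unlikely to be pure luck."""
    rate_a, rate_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    std_err = (pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b)) ** 0.5
    z = (rate_a - rate_b) / std_err
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 9 opens out of 10 vs. 6 out of 10 looks dramatic,
# but the gap is not statistically significant at this size.
small = open_rate_p_value(9, 10, 6, 10)

# The exact same rates at 1,000 recipients each are solid evidence.
large = open_rate_p_value(900, 1000, 600, 1000)
```

Same open rates, opposite conclusions – the only difference is how many people were in the test.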
3. Try 10-10-80
Performing a test where 50 percent of people see one option and 50 percent see the other isn’t always the best approach. Try a 10-10-80 test, especially when it comes to email. Split the two options across 20 percent of your audience (10 percent each). Once you determine a winner, send the winning version to the remaining 80 percent and see whether the results hold up.
4. Select groups cautiously
When possible, carefully select who sees the variable and who sees the control, but be sure to eliminate any bias. For example, stacking the control list with subscribers who rarely open emails because you want the variable to win will lead to false conclusions.
Choosing people completely at random isn’t ideal either: it makes the test harder to track and could still skew results.
Instead, assign people to each group using the first letter of their last names or some other unbiased, repeatable measure.
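One repeatable alternative to the last-name rule is hashing each address – a sketch under my own assumptions (the function name and test label are illustrative). A hash splits people roughly evenly and without the clustering a single letter can introduce, and the same subscriber always lands in the same group, which keeps results trackable.

```python
import hashlib

def assign_group(email, test_name="subject_line_test"):
    """Deterministically map a subscriber to group 'A' or 'B' by
    hashing their address together with the test name. Unbiased,
    roughly 50/50, and stable: the same email always gets the
    same group for a given test."""
    digest = hashlib.md5(f"{test_name}:{email}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The assignment never changes between runs:
assign_group("jane@example.com")  # always the same letter for Jane
```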
5. Test, test, test
Even with a large sample size, one test may not provide the most accurate results. Test the variable several times before instituting long-term changes.
It’s also possible you can keep learning from a recent result. For example, if you find that shortening tweets from 120 characters to 100 got more clicks or retweets, find out what happens when you get it down to 90.
6. Document test results
In addition to helping you remember what works and what doesn’t, recording results can help others within your company implement some of your findings and, perhaps, do some testing of their own. The collected research will only make your marketing better.
7. Ask people about their decisions
Getting unexpected results? Make time to talk to test subjects on both sides of the fence. Their perspective could reveal why something works – and give you a rationale you can test further.
8. Learn from inconclusive tests
An inconclusive test is not a failure. It could mean that the variable does not play a significant role in producing the desired result. Or it could mean both options are equally effective. Keep testing to confirm the results.
9. Don’t over-test
Testing a hypothesis too much can lead to inaccurate data, especially if you keep taking a narrower and narrower view.
Another potential problem with over-testing is alienating your followers, visitors, or subscribers. The people who follow you are not guinea pigs; they follow you because you consistently offer something of value. Mess with that consistency and they may get annoyed and stop following you.