A/B testing is a standard practice for marketers. Yet there is another breed of professionals who do the same type of thing: scientists. An A/B test is literally a scientific experiment. We want to find out if something will have a specific impact on the world. But instead of testing on lab mice or in beakers, we use the internet and live consumers. Here are some things we can learn from our lab coat-toting peers to improve our A/B tests and make them more effective.
Having goals for the A/B tests
Remember learning the scientific method back in school? First you create a hypothesis, then you run the experiment, collect the results, and compare to the original hypothesis. Then you lather, rinse, repeat. Well, A/B testing is simply marketing science.
But many people jump straight into the experiment stage without creating that initial hypothesis. You should make an educated guess about what your change will do or what kind of result you expect to get. That guess is the goal of the test; it's how you'll judge whether the test was a success or a failure.
Try to be as specific as possible when outlining your goal. How much of an improvement do you think your change will have? Is it going to improve general traffic from a search engine, or improve your conversion rate on the page? Can it possibly have a negative effect on something else? Try to predict as much as you can so you can better track what happens.
Analyzing more than one data set
Proper A/B testing means changing only a single thing at a time to test the impact it might have on a web page's performance. The goal is to steadily improve a site by experimenting, collecting data, and analyzing to see the impact the change had compared to the original page.
A major problem is that while doing an A/B test, it's easy to focus only on the page's own performance, specifically on a few data sets, and miss the impact the change has on the whole site. A popular one to zero in on is page visits, since it's easy to track and compare.
Other data sets need to be analyzed so you can better understand the impact your change has. Each data set is a tiny piece of a puzzle, and you need to put them all together to get a full picture of what the average consumer does in response to your change.
For example, let's say you run a test and the number of page visits skyrockets. If you stop there, you might declare the change a huge success and apply it to the whole site. But suppose another data set tells a different story: your exit rate on the page went up too. Yes, more people are coming to the page, but more are also leaving your site after viewing it.
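The reasoning above can be sketched with a few lines of analysis. All numbers here are hypothetical, invented purely for illustration:

```python
# Hypothetical A/B test results (all figures invented for illustration).
control = {"visits": 1000, "exits": 300}   # original page
variant = {"visits": 1500, "exits": 750}   # changed page

def exit_rate(page):
    """Share of visitors who leave the site from this page."""
    return page["exits"] / page["visits"]

# Visits alone look like a big win...
visits_change = (variant["visits"] - control["visits"]) / control["visits"]
print(f"Visits up {visits_change:.0%}")

# ...but the second data set tells a different story.
print(f"Exit rate: {exit_rate(control):.0%} -> {exit_rate(variant):.0%}")
```

Looking at either number alone would lead to the wrong conclusion; together they show more traffic arriving but a larger share of it leaving.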
You can do a wide variety of things with all of your data. You can diagnose what might have gone wrong, predict what people will do in the future, and guide what you need to do in the future to get the best results for your efforts.
Setting up a margin of relevance
Interpreting data can get a bit messy. A site's data can have mild fluctuations from week to week without a single thing being changed. So how can you tell whether your change caused the shift, or whether it's just a normal fluctuation?
You need to set up a margin that those fluctuations can fall into. Usually, that margin should be about 10% of your average, but if your data routinely swings more than that in either direction, feel free to make the margin wider.
To count the results of your test as valid, they need to fall outside that margin. If they don't, you can conclude that the effect of the test wasn't large enough to make a difference, or made no difference at all.
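This margin check is simple enough to automate. Here's a minimal sketch, where the 10% band and the visit counts are illustrative assumptions, not real data:

```python
# A minimal sketch of the "margin of relevance" check described above.
# The 10% default and the sample numbers are hypothetical.

def outside_margin(baseline, result, margin=0.10):
    """True if `result` deviates from `baseline` by more than `margin`
    (expressed as a fraction of the baseline)."""
    return abs(result - baseline) / baseline > margin

weekly_visits_baseline = 2000   # average weekly visits before the test
test_result = 2150              # visits during the test week

if outside_margin(weekly_visits_baseline, test_result):
    print("Result exceeds normal fluctuation - worth acting on")
else:
    print("Within normal fluctuation - treat as no real effect")
```

Here 2,150 visits is only a 7.5% bump over the 2,000 baseline, so it stays inside the 10% band and shouldn't be counted as a valid result.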
Different things work for different pages
So, you’ve done your test and collected your data and determined the results were significant. Now you can start applying it to the rest of the site, right?
Well, all you actually know is that it works on that one page. It’s safe to assume that those changes will work with extremely similar pages, but others might not take the change well. A landing page has a different goal than a blog post, and what works for one might not translate to the other.
What do you do? You set up another A/B test, but for other pages. That way you can see if the change has a similar effect on those other pages. By doing this, you can prevent disaster for your site, and also detect trends in your visitors' behavior.
Another thing to consider is running the same test in the same scenario, but at a different time. If you can recreate the same results again, you know your change works and isn't a fluke. It's why scientists recreate their peers' experiments. If it's good enough for "regular" scientists, it's good enough for us "marketing scientists."
Different areas to focus your A/B tests
You can A/B test practically everything, from the color of a background to the format you present your content in. But you want to test with a purpose and know what you want to improve. Each page type has different goals, and what you test needs to align with them.
The purpose of a landing page is to convert visitors, whether that's simply capturing a lead or making a sale. You should focus your A/B tests on increasing that conversion rate. Other metrics are worth watching because they might affect your conversion rate, but stay focused on improving that one.
Business blogs have the purpose of bringing people to the site and pushing them along your marketing funnel. People don’t buy products from a blog post, but it should lead them eventually to a landing page. Your A/B tests for your blog should be on how to get more people to your site and how to keep them on your site, moving towards a sale.
Your site’s home page needs to be a functional hub to get people where they want to go. It also needs to quickly convey what your company does, and have a hook to keep them on your site. A/B tests should make it easier to get wherever people want to go and improve catching their interest.
If you send out any type of newsletter, ad, or update to tons of people through email, you can effectively A/B test them to learn how to improve your communication tactics. There are tons of ways to use email to your advantage, whether that's keeping people in the loop on what your business is doing, helping progress them through your marketing funnel, converting customers into advocates, or something else. Beyond standard email improvements, like grammar and images, there's a lot you can test. The key is identifying what you want your emails to do, then testing to improve that result.
Do you have a tactic of having more effective A/B tests? What challenges do you encounter when trying to interpret your results? Let us know in the comments below.