A/B testing is a valuable addition to the online marketing mix, and the method has proven itself time after time. The web is filled with articles about what A/B testing is, how to start, which tools to use and plenty of success stories. However, none of that guarantees success: there are many common pitfalls. To get the best out of A/B testing, avoid the following mistakes.
1. Testing the wrong things
What are you going to test first? There are thousands of elements and combinations of elements you could test, which makes it easy to get lost in the choices and end up with a mediocre result. The most important thing is to base your choices on a trade-off between a test's expected impact and the effort (time, money and capacity) it takes to realize it. For this reason it is crucial to make a testing plan in advance. This plan should describe:
– What, why and how you are going to test
– Which are the key performance indicators you will use to decide the success of the test
– The expected duration of the test.
By writing down these points in advance you can make a well-thought-out decision and make sure you test effectively. Besides making a plan, you can do an impact-effort analysis. Such an analysis can show whether it is more effective to test from back (where the purchase intention is highest) to front: from the last page in the sales funnel to the shopping basket, and from the product detail page to the homepage.
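The impact-effort trade-off described above can be sketched as a simple scoring exercise. The test ideas and the 1-to-5 scores below are hypothetical; in practice the scores would come from your own traffic data and development estimates:

```python
# A minimal sketch of impact-effort prioritization, using made-up scores.
# Each candidate test gets an expected-impact score and an effort score
# (time, money, capacity) on a 1-5 scale; high-impact, low-effort ideas
# should be tested first.

test_ideas = [
    {"name": "checkout button copy", "impact": 4, "effort": 1},
    {"name": "homepage hero redesign", "impact": 3, "effort": 5},
    {"name": "product page reviews block", "impact": 5, "effort": 3},
]

# Rank by impact-to-effort ratio, highest first.
ranked = sorted(test_ideas, key=lambda t: t["impact"] / t["effort"], reverse=True)

for idea in ranked:
    print(f'{idea["name"]}: {idea["impact"] / idea["effort"]:.2f}')
```

However simple, a ranking like this forces the consideration the testing plan asks for: it keeps an expensive redesign from jumping the queue ahead of a cheap copy change with similar expected impact.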
2. Not thoroughly checking the test before you launch it
A/B testing is a great way to improve conversion, but it also carries risks. Before and after launching a test it is very important to check it for bugs. A bug in your test can lead to:
– Invalid data, which means you cannot use your test results
– Visitors being unable to take the desired action
Besides wasting time, this can cost you leads or turnover. This can be avoided by:
– Doing a cross-browser check, in which you check whether your test works in the most used internet browsers. Do not forget to also check mobile browsers.
– Comparing statistics of your testing software and your web statistics software. Does everything work correctly?
– Monitoring your test results carefully; be aware of fluctuations and patterns. This way you can intervene when necessary.
3. Drawing the wrong conclusions
Analyzing and interpreting data is essential in the A/B testing cycle. Start this once enough data has been collected to draw reliable conclusions and a winning version begins to emerge.
Yet we still see the wrong conclusions being drawn and, as a result, the wrong decisions being made. This can happen as a result of:
– Stopping the test too soon. The more data you have, the more likely it is that your results are valid. Tip: aim for at least 100 conversions per version and a run time of 14 days.
– Not segmenting the test results. Without segmentation you only see the “big picture” and overlook important differences between groups of visitors. Tip: link your testing tool to your analytics tool, so you can use its segmentation options to find those differences. Also check whether the measured results depend on circumstances such as traffic source, browser, or new versus returning visitors.
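To make "enough data" concrete, a common way to check whether the difference between two versions is statistically meaningful is a two-proportion z-test. This is a sketch using only the Python standard library, and the visitor and conversion counts are made up:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a/conv_b: number of conversions per version
    n_a/n_b: number of visitors per version
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool both versions to estimate the standard error under the
    # null hypothesis that the rates are equal.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up example: 120/6000 conversions for A vs 150/6000 for B.
z, p = two_proportion_z_test(120, 6000, 150, 6000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

In this made-up example the p-value comes out above 0.05, so the difference is not significant at the usual threshold even though both versions have well over 100 conversions. That is exactly why stopping a test too soon can leave you without a reliable winner.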
4. Too much focus on conversion optimization
The conclusions you draw from the results will eventually lead to a decision: implement a certain version, run another test, or take a completely new approach. Whatever the decision, it is important to learn from your tests. Too many marketers focus solely on optimizing conversion, which leads to overlooking important details and weaker testing results. When a test does not improve conversion, do not treat it as a failure; try to learn from it. This way you learn about your visitors, and that is very valuable. Besides, you can also use A/B testing to prove that something does not work.