7 Tips for A/B Testing at Low Traffic Websites

Michal Pařízek
6 min read · May 18, 2016


One of the questions I have been asked most often during my CRO trainings is: “Can I do A/B testing on a low-traffic or low-conversion website?”

A/B testing is a methodology based on statistical formulas, and to get trustworthy A/B test results you need a big enough sample size.

How big? The required size depends on three parameters:

  • the current conversion rate
  • the expected improvement you want to detect
  • the confidence level

This is how the reality looks for a lot of websites:

Current Conversion Rate: 2%
Expected improvement: 10%
Confidence level: 95%
Sample size: 39,488 users per variant

80k users (40k for each variant) could be a whole quarter’s or even a year’s worth of traffic. In the Czech Republic, we have hundreds of small and mid-sized e-commerce projects which face this entry barrier to A/B testing.
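If you want to check such numbers yourself, here is a minimal Python sketch of one common formula (a two-sided z-test on the difference of two proportions, without a power term). The helper name is mine, and online calculators use slightly different formulas, so treat the exact figures as approximate:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_lift, confidence=0.95):
    """Approximate users needed per variant to detect a relative lift
    over a baseline conversion rate with a two-sided z-test."""
    p1 = baseline                        # current conversion rate
    p2 = baseline * (1 + relative_lift)  # expected conversion rate
    delta = p2 - p1                      # absolute difference to detect
    p_bar = (p1 + p2) / 2                # average of the two rates
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 for 95%
    return ceil(z ** 2 * 2 * p_bar * (1 - p_bar) / delta ** 2)

print(sample_size_per_variant(0.02, 0.10))  # ~39,489, in line with the 39,488 above
```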

And even at Avast, the global antivirus company I work for, we sometimes face the low-traffic (or low-conversion) issue, particularly in small markets or minor business flows, which, on the other hand, are ideal for courageous experiments.

However, there are several ways to tackle this issue and design better A/B tests even for low-traffic websites.

1. Think big

You must unleash your creativity and make the alternative (challenger) experience truly different, not just different colours or slight changes in copy.

Think about a new value proposition or a new pricing model. Think about a new layout, a completely different tone of voice, a re-structured product portfolio. Think big.

It has a crucial impact on required sample size:

Current Conversion Rate: 2%
Expected improvement: 50%
Confidence level: 95%
Sample size: 1,871 users per variant

39,488 users per variant vs. 1,871 users per variant: over 20 times fewer! The bigger the difference between the two variants’ performance, the fewer samples you need. That’s statistics.
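The hypothetical sketch from above confirms the drop:

```python
print(sample_size_per_variant(0.02, 0.50))  # ~1,873 users per variant
print(39_489 / 1_873)                       # ~21x fewer users needed
```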

A good example of a radical change: a different layout, different product options, and different copy.

The downside of the “think big” approach is that you can’t easily identify the pivotal element, the one that had the biggest impact on whether A or B won.

2. Forget about MVT and A/B/C/… tests

Every additional testing variation prolongs the test duration significantly.

Let’s say your average weekly traffic is 2,000 visitors. If the required sample size per variant is 1,871, as in the example above, it will take you 2 weeks to get results from an A/B test, 3 weeks from an A/B/C test, and 4 weeks from an A/B/C/D test.
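A quick back-of-the-envelope helper (my own sketch, assuming traffic is split evenly across variants and every visitor enters the test) reproduces those durations:

```python
from math import ceil

def weeks_to_run(sample_per_variant, n_variants, weekly_traffic):
    """Weeks needed to collect the sample when weekly traffic is
    split evenly across all variants."""
    return ceil(n_variants * sample_per_variant / weekly_traffic)

for n in range(2, 5):
    print(f"{n} variants: {weeks_to_run(1871, n, 2000)} weeks")
# 2 variants: 2 weeks / 3 variants: 3 weeks / 4 variants: 4 weeks
```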

If your A/B test runs for more than 3–4 weeks, the risk of sample pollution increases. There is a higher chance that the results get affected by external factors (holidays, campaigns, technical issues). And more people may delete their cookies, which control which variation each user sees.

In 2012, Econsultancy released a report (based on 1,600 online respondents) that found that 73% of respondents regularly manage their cookie settings using their browser. When asked what they would say if a website asked for their permission to set cookies when they visit, only 23% responded yes (60% responded “maybe”).

So, if your planned A/B/C test requires 4 weeks to run, split the experiment into two A/B tests. The quality of the data will be much better.

3. Choose a micro-conversion as the main KPI

In industry jargon, a micro-conversion is an event that precedes a macro-conversion, such as a transaction. Micro-conversions don’t always lead to macro-conversions, but they are signals that sessions with micro-conversions might eventually end up macro-converting.

Typical micro-conversion events include Add to Cart, Product Video Seen, Newsletter Sign-up, etc.

The good news is that you usually have a much higher number of micro-conversions than macro-conversions, and thus a higher “micro-conversion rate”.

And the higher your conversion rate, the fewer samples you need for A/B testing:

Current Conversion Rate: 10%
Expected improvement: 10%
Confidence level: 95%
Sample size: 7,218 users per variant
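For comparison, the hypothetical helper from tip 1 gives nearly the same figure (small differences come from rounding in different calculators):

```python
print(sample_size_per_variant(0.10, 0.10))  # ~7,221, close to the 7,218 above
```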

However, don’t forget that a higher micro-conversion rate doesn’t necessarily lead to more macro-conversions!

This Avast Call to Action A/B test is a nice example:

Version B shows the price below the CTA. Version A does not.

Not showing the price (A) before entering the checkout led to a higher click-through rate. But in the end, it did not lead to a better conversion rate, as a lot of people abandoned the checkout that followed the green button. The clear winner was version B.

4. Execute site-wide A/B tests

When your website doesn’t have a lot of traffic, don’t limit your A/B tests to specific sections or user flows. Include the maximum amount of traffic your website gets.

Typical testing use cases may include header and footer tests, checkout tests (if the checkout is the end of your conversion funnel) or website layout tests.

The downside of site-wide A/B tests is their potential scope. As mentioned earlier in the post, you should think big and use very different testing variations to get bigger lifts. However, it is challenging to think big in site-wide A/B tests; it would almost mean testing two completely different websites.

Nevertheless, remember to include as much traffic as you can so you get results sooner.

5. Boost your A/B test traffic with campaign traffic

When you need to run an important A/B test, you may consider temporarily boosting your traffic volume with a campaign (PPC, email, social, etc.).
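Reusing the duration helper from tip 2, a hypothetical campaign that doubles weekly traffic roughly halves the test length:

```python
print(weeks_to_run(1871, 2, 2000))  # 2 weeks at the usual 2,000 visitors/week
print(weeks_to_run(1871, 2, 4000))  # 1 week if a campaign doubles the traffic
```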

It is a legitimate way to increase your volume and therefore shorten the A/B test’s length. However, it is also dangerous: the results of such experiments will be tied to the extra traffic you temporarily gained.

It means that the winning experience may turn out to be a losing experience at your usual traffic volume and quality.

A post-experiment analysis should follow every A/B test (to verify that the results hold after rolling out the winning experience to all traffic). But such an analysis is even more important for A/B tests boosted with campaign traffic.

The same applies to A/B testing around Christmas, Black Friday, and other local shopping holidays. You should avoid testing during these times, as you usually get a different type of traffic than during the rest of the year. You may take advantage of the higher volume to run some A/B tests more quickly, but be very careful with evaluation and data interpretation.

6. “A/B test” by using Five Second Test

Five Second Test is a simple usability tool made by UsabilityHub. You upload a mockup or screenshot of your website, add a few questions, and real people will see it for just 5 seconds and then answer questions like:

  • What does the service do?
  • What was the promotion in the top corner about?

It is an excellent tool to check if people understand the value proposition correctly.

At Marketing Festival 2015, Oli Gardner showed a nice trick: using the browser’s Inspect feature, you can quickly modify a website’s code and create an alternative version. Then you take screenshots, upload both the original and the alternative version to Five Second Test, and get instant feedback on both.

Indeed, it is not real A/B testing, but I find this method very useful and believe it can help you shape your ideas.

7. Do “B/A testing”

When there is no way to conduct an A/B test in a reasonably short time, please do at least “B/A testing”: Before/After testing.

Make sure you implement the new design in a period of steady performance, and then (after implementation) monitor the performance very closely.

One of the biggest advantages of A/B testing is that both variations run at the same time, so external factors tend to be minimized. This is not the case with B/A testing. However, sometimes it is the best and only way to evaluate a change.

Whenever you implement a change without A/B testing it first, set your KPIs upfront and then thoroughly analyze the impact.
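One simple way to do that analysis is a two-proportion z-test comparing the periods before and after the change. This is a sketch with purely hypothetical numbers, and keep in mind that, unlike a real A/B test, the two periods can differ in seasonality, campaigns, and traffic mix:

```python
from math import sqrt
from statistics import NormalDist

def before_after_z_test(conversions_before, users_before,
                        conversions_after, users_after):
    """Two-sided z-test on the difference of two conversion rates."""
    p1 = conversions_before / users_before
    p2 = conversions_after / users_after
    pooled = (conversions_before + conversions_after) / (users_before + users_after)
    se = sqrt(pooled * (1 - pooled) * (1 / users_before + 1 / users_after))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 2.0% before vs. 2.4% after, 10,000 users in each period
z, p = before_after_z_test(200, 10_000, 240, 10_000)
print(f"z = {z:.2f}, p-value = {p:.3f}")  # z = 1.93, p-value = 0.054
```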

I hope you find these tips useful. First, it is essential to know how much traffic you need to run an A/B test on your website. There are many sample size calculators online: by Evan Miller, by Optimizely, by RJ Metrics, etc.

And second, I believe it is wise to understand which parameters determine the required sample size, how you can influence them, and, last but not least, how to tackle the issue of A/B testing at low-traffic websites.

Happy A/B testing!
