Why and how can I run an A/B test?

Master A/B tests so that you can perfect your paywall strategy

Written by Madeleine White
Updated over 3 years ago

What does an A/B test do? 

An A/B test is a marketing technique grounded in statistics. Used in a digital context, an A/B test allows you to prove or disprove a hypothesis about the performance of an interface. 

A/B testing is a user-friendly optimization method that helps improve a site through successive iterations. Once an A/B test has been launched, visitors to a site are randomly shown different versions of the same web page. The idea is to find the best balance between being user-friendly and generating the highest number of user conversions.  

Please note: 'A/B test' is a generic name given to any test of this kind, whether it is a true A/B split test, an A/A test, a multivariate test (MVT), etc.

Poool and A/B tests

In the context of driving your paywall strategy, it is very important to study the behaviour of your audience and understand what will convince a reader of the value of your offers. 

  • Which subscription offer is the most convincing? 

  • Which scenario is the most effective? 

  • What type of message has the biggest impact? 

Poool's system allows you to create and run A/B tests for each of your reader segments. Consequently, 2 of your visitors from the same segment could find themselves faced with different scenario journeys without even realising it. For example, the number of steps in the journey, the type of actions, the wording of the messages, etc. could all differ. 
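Under the hood, this kind of random-but-consistent assignment is typically implemented by hashing a stable visitor identifier, so each visitor keeps seeing the same variant for the lifetime of the test. The sketch below is a generic Python illustration of that idea, not Poool's actual mechanism; the function name and 50/50 split are assumptions:

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str, split: float = 0.5) -> str:
    """Deterministically bucket a visitor into variant A or B.

    Hashing a stable visitor ID together with the test name yields a
    uniform pseudo-random number in [0, 1], so the same visitor always
    gets the same variant for a given test.
    """
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash prefix to [0, 1]
    return "A" if bucket < split else "B"

# Two visitors from the same segment may land in different journeys:
print(assign_variant("visitor-123", "loyal-readers-paywall"))  # e.g. 'A'
print(assign_variant("visitor-456", "loyal-readers-paywall"))  # e.g. 'B'
```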

What are the best practices to follow when doing an A/B test? 

An A/B test is used to prove or disprove a hypothesis. However, many marketers get misleading results when they run a test. Why?
Because running a test requires an extremely rigorous process. 

Deciding on the right sample size

It is best to test your variations on a large sample in order to get the most reliable results. To work out the ideal sample size for a given metric, there are various tools online that can help, such as this Evan Miller calculator, which tells you in just a few clicks the sample size required for your test to be meaningful. 

To reach a statistical significance level of 95%, you will need, among other things, enough traffic. Ideally this means a few hundred thousand visitors to your site. 
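If you would rather estimate this yourself, the standard two-proportion formula behind calculators like Evan Miller's can be written in a few lines. This is a minimal Python sketch, assuming a two-sided test at 80% power; the function name and example numbers are ours:

```python
from statistics import NormalDist

def sample_size_per_variant(p_base: float, min_lift: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variant to detect a relative lift in a
    conversion rate (two-sided, two-proportion test).

    p_base:   current conversion rate, e.g. 0.02 for 2%
    min_lift: smallest relative improvement worth detecting, e.g. 0.20
    """
    p_alt = p_base * (1 + min_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    pooled = (p_base + p_alt) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_power * (p_base * (1 - p_base)
                              + p_alt * (1 - p_alt)) ** 0.5) ** 2
    return int(numerator / (p_base - p_alt) ** 2) + 1

# Detecting a 20% relative lift on a 2% base rate at 95% confidence:
print(sample_size_per_variant(0.02, 0.20))  # about 21,000 visitors per variant
```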

Choose the most relevant indicators 

We recommend choosing your KPIs in advance. In short, a variation could have a beneficial effect on one metric but a negative effect on others. It is therefore important to have a clear vision of your objectives before the tests even begin.
If you are limited to running a test on only a small sample, be sure to take this size into consideration and pick indicators that remain meaningful at that scale. For example, it could be more useful to look at the traffic on your subscription offer page and the effect of any variations on it, rather than at the number of conversions: this way you have a larger number of events to analyse. 

Change the right number of things between two versions

Often, when you test a specific element on a page in the hope that it will have a positive effect on the conversion rate, you can overlook a bigger problem hidden elsewhere. 

For example, changing the wording on a Poool widget button is a good idea for a test; however, it won't be enough, and may even be counterproductive, if behind it sits a conversion tunnel that isn't very effective. This is what is known as the local maximum trap. It can be avoided by identifying the elements that have the biggest impact on your audience and prioritising them for testing and optimisation: for example, the position of the paywall on the page, the length of the subscription process, the number of stages in the tunnel, the attractiveness of offers, etc.
Once these elements have been identified and understood, it is recommended to start with tests where you are fairly certain of the problem. This allows you to get your whole team on board by proving the effectiveness of A/B testing. As you gain experience in testing, you can start to run more and more ambitious tests. 

Don't run too many tests at the same time

With Poool you can create an A/B test for each of your audience segments, which represents a great opportunity to optimise your paywall strategy for each user profile. 

However, please note that preparing and implementing an A/B test, then analysing the results and drawing conclusions and actions to take, is very time consuming. If you don't have the resources to run multiple tests at the same time, it's better to prioritise one test over the others and work through them one by one. 

Furthermore, if you want an A/B test to help you optimize your website, don't forget that your tests may have a negative impact on it, since a section of your audience will be exposed to a variation that may be less effective. Consequently, the more tests you have running at the same time, the higher the percentage of your audience shown a less effective variation of your site. Just be sure to understand what running a test will mean! 

Don't leave your tests running for too long

Although you need to leave your test running long enough to get a reliable result, you should also make sure it doesn't last too long. Cookies could be deleted or expire, and you increase the risk of losing ranking on Google. A test that runs too long could therefore have a real impact on your search engine optimization and/or on the reliability of the test itself. 

Take into account external factors when analysing results

Many external factors, which may change from day to day, can have an impact on how people use your website: whether they access your site during the week or at the weekend, during the day or at night, the type of news at the time, etc. You therefore need to stay aware of these factors and put any statistics into context before making business decisions based on the analysis of A/B test results. 

Document your A/B test

So that you don't lose any of the lessons learned from a test, and so that you can share them with your team, it is important to record every test you run along with a description of what was done: the hypothesis tested, the variations implemented, how long the test ran, the results, etc. 

Continue to optimise the website 

Once an A/B test has finished, it will lead to yet more new hypotheses to test. It's a marketing technique that is never really finished! 

How and when should I stop an A/B test?

Analysing an A/B test is a very delicate task. Given the statistical nature of the tests, the data must be handled rigorously. 

Once you have found the right sample size (see above for more information), it is important to understand the concept of the statistical significance index. This measures the probability that the differences in results between variations are not due to chance. A threshold of 95% is what most consider acceptable. 
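As an illustration, this index can be computed from raw counts with a two-proportion z-test. The sketch below is a generic statistical example in Python, not a Poool feature; the function name and example counts are assumptions:

```python
from statistics import NormalDist

def significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Confidence that the observed difference between two conversion
    rates is not due to chance (two-sided two-proportion z-test).
    A result of 0.95 or more is the usual acceptance threshold."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = abs(p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(z))
    return 1 - p_value

# 120 conversions out of 10,000 visitors vs 160 out of 10,000:
print(f"{significance(120, 10_000, 160, 10_000):.1%}")  # about 98%, above 95%
```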

Additionally, the duration of a test depends on multiple factors. For a sample of about ten thousand users and a hundred conversions, a test could last 2 or 3 weeks. Be aware, though, that you mustn't stop a test only because the statistical significance index has reached or passed 95%. For example, if your sales cycle lasts 1 month between first consideration and conversion, it is important to take this information, specific to your company, into account.

Finally, be aware of the stability of your data. If the conversion rate you're measuring or the statistical significance index is still varying widely, you will need to continue the test for longer. This may be due to recent changes or to regression to the mean. Once your data has stabilised (and the other numerical criteria have been checked), you will have reliable results in hand and be able to conclude your A/B test. 

This article by Kameleoon is also very interesting to read on the subject. 

The general process of running an A/B test

Having explained the process above, here is a simplified, general version of the steps to follow for an A/B test, with an example for each: 

  1. Analyse - a study of paywall usage statistics 

  2. Work out the problem - there isn't enough traffic in the subscription tunnel amongst the most loyal readers

  3. Form a hypothesis - the journey used for readers to discover articles is too long

  4. Define variant A (the original or control version) - a discovery journey made up of 7 steps 

  5. Define variant B - a discovery journey made up of 5 steps

  6. Run the A/B test - run the 2 journeys for 3 weeks on the loyal readers audience segment

  7. Analyse the results - variant B generated 17% more traffic in the subscription tunnel, with a statistical significance of 95%. This means the A/B test result is reliable 19 times out of 20

  8. Make decisions - implement the journey that out-performed the other (variant B) 

  9. Documentation - build up a record of the test, including its length, settings, variations, results, etc. (see the sketch below) 
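As a sketch of step 9, here is one possible way to log such a test as a structured record. The field names and values are purely illustrative, not a Poool format:

```python
import json

# An illustrative record for the example test above.
test_record = {
    "name": "loyal-readers-discovery-journey",
    "hypothesis": "The discovery journey for loyal readers is too long",
    "segment": "loyal readers",
    "variant_a": "7-step discovery journey (control)",
    "variant_b": "5-step discovery journey",
    "duration": "3 weeks",
    "primary_kpi": "traffic entering the subscription tunnel",
    "result": "+17% tunnel traffic for variant B, 95% significance",
    "decision": "roll out variant B to the whole segment",
}

# Append the record to a shared log, one JSON object per line.
with open("ab-test-log.json", "a") as log:
    log.write(json.dumps(test_record) + "\n")
```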

We hope that we have passed on all of the advice you need to run an A/B test.
Please feel free to contact us on Intercom (the little blue bubble in the bottom right hand corner) if you need any help!
