A/A Testing: A Waste of Time or Useful Best Practice?

A/A testing is little known and its usefulness is hotly debated, but it brings real value to anyone looking to deploy an A/B testing solution with rigor and precision.

But before we begin…

What is A/A testing?

A/A testing is a derivative of A/B testing (see our definition of A/B testing). However, instead of comparing two different versions (of your homepage, for example), here we compare two identical versions.

Two identical versions? Yes!

The main purpose of A/A testing is simple: verify that the A/B testing solution has been correctly configured and is effective.

We use A/A testing in three cases:

  • To check that an A/B testing tool is accurate
  • To set a conversion rate as reference for future tests
  • To decide on an optimal sample size for A/B tests

Checking the accuracy of the A/B testing tool

When performing an A/A test, we compare two strictly identical versions of the same page.

Of course, the expected outcome of an A/A test is that both versions show similar conversion figures. The idea here is to prove that the testing solution works reliably.

Logically, we run an A/A test when we set up a new A/B testing solution or when we switch from one solution to another.

However, a “winner” is sometimes declared between two identical versions. When that happens, we must seek to understand why, and this is the benefit of A/A testing:

  • The test may not have been conducted correctly
  • The tool may not have been configured correctly
  • The A/B testing solution may not be effective.
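Note that even a correctly configured, reliable tool will occasionally declare a false winner purely by chance: at a 95% confidence threshold, roughly 5% of A/A tests do so. Below is a minimal Python sketch (not tied to any particular testing tool) that simulates repeated A/A tests with an assumed 5% baseline conversion rate and 10,000 visitors per variant, and counts how often a two-proportion z-test declares a “winner” between two identical versions.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/A setup: both variants share the same true conversion rate.
rng = np.random.default_rng(42)
true_rate = 0.05               # assumed baseline conversion rate
visitors_per_variant = 10_000

runs = 1_000
false_winners = 0
for _ in range(runs):
    conversions_a = rng.binomial(visitors_per_variant, true_rate)
    conversions_b = rng.binomial(visitors_per_variant, true_rate)
    # Two-proportion z-test between the two identical variants.
    _, p_value = proportions_ztest(
        [conversions_a, conversions_b],
        [visitors_per_variant, visitors_per_variant],
    )
    if p_value < 0.05:         # "winner" declared at 95% confidence
        false_winners += 1

print(f"False winners in {runs} simulated A/A tests: {false_winners} (~5% expected)")
```

If your real A/A tests flag a winner much more often than that, one of the three causes above is the likely culprit.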

Setting a reference conversion rate

Let’s imagine that you want to set up a series of A/B tests on your homepage. You set up the solution, but a problem arises: you do not know which conversion rate to compare the different versions against.

In this case, an A/A Test will help you find the “reference” conversion rate for your future A/B tests.

For example, you run an A/A test on your homepage where the conversion goal is a completed contact form. When you compare the two versions, the results are nearly identical (and this is normal): 5.01% and 5.05% conversions. You can now use this figure with the certainty that it truly represents your conversion rate, and launch your A/B tests to try to exceed it. If an A/B test later tells you that a “better” variant converts at 5.05%, it actually means there is no progress.
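As a minimal sketch, here is how that reference rate could be computed from A/A data; the figures below echo the example above and are purely illustrative.

```python
# Illustrative A/A results: two identical versions of the homepage,
# with a completed contact form as the conversion goal.
visitors = {"A1": 20_000, "A2": 20_000}
form_submissions = {"A1": 1_002, "A2": 1_010}   # ~5.01% and ~5.05%

# The versions are identical, so pool them to estimate the baseline.
baseline_rate = sum(form_submissions.values()) / sum(visitors.values())
print(f"Reference conversion rate: {baseline_rate:.2%}")   # ≈ 5.03%
```

Future A/B variants should beat this baseline by a meaningful margin, not merely land inside its normal range of fluctuation.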

Finding a sample size for future tests

The problem in comparing two similar versions is the “luck” factor.

Since the tests are formulated on a statistical basis, there is a margin of error that can influence the results of your A/B testing campaigns.

It’s no secret how to reduce this margin of error: you have to increase the sample size to reduce the risk that random factors (so-called “luck”) skew the results.

By performing an A/A test, you can “see” at what sample size the test solution comes closest to “perfect equality” between your identical versions.

In short, an A/A test allows you to find the sample size at which the “luck” factor is minimized; you can then use that sample size for your future A/B tests. That said, A/B tests generally require a smaller sample, since the difference you are trying to detect between two genuinely different variants is larger than the near-zero gap expected in an A/A test.
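To make the “luck” factor concrete, here is a minimal simulation sketch (the 5% baseline conversion rate is an assumption for illustration). It draws two identical variants at increasing traffic levels and shows how the gap between their measured conversion rates shrinks as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(7)
true_rate = 0.05   # assumed baseline conversion rate

# Two identical variants, observed at increasing sample sizes.
for n in (1_000, 10_000, 100_000):
    rate_a = rng.binomial(n, true_rate) / n
    rate_b = rng.binomial(n, true_rate) / n
    gap = abs(rate_a - rate_b)
    print(f"n={n:>7}: A={rate_a:.2%}  B={rate_b:.2%}  gap={gap:.2%}")
```

The sample size at which the gap becomes negligible for your traffic and conversion rate is a reasonable starting point for sizing your future tests.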

A/A testing: a waste of time?

The question is hotly debated in the field of A/B Testing: should we take the time to do an A/A test before doing an A/B test?

And that is the heart of the issue: time.

Performing A/A tests takes considerable time and traffic

In fact, performing A/A tests takes considerably more time than A/B tests, because the volume of traffic needed to show that two “identical variants” lead to the same conversion rate is substantial.

The problem, according to ConversionXL, is that A/A testing is time-consuming and encroaches on traffic that could be used to conduct “real tests,” i.e., those intended to compare two variants.

Finally, A/A testing is much easier to set up on high-traffic sites.

The idea is that if your site is newly launched or has low traffic, there is little point in spending time on an A/A test: focus instead on optimizing your purchase funnel or your Customer Lifetime Value. The results will be much more convincing and, above all, much more interesting.

An interesting alternative: data comparison

To check the accuracy of your A/B testing solution, there is another method that is easy to set up. To do this, your A/B testing solution needs to be integrated with another source of analytics data.

By doing this, you can compare the two data sets and see whether they point to the same results: it is another way to check the reliability of your testing solution (a simple cross-check is sketched after the list below).

If you notice significant differences in data between the two sources, you know that one of them is:

  • Either poorly configured,
  • Or ineffective and must be changed.
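As a minimal sketch of such a cross-check, assume you can export visitor and conversion counts for the same page and period from both your testing tool and your analytics platform; all figures and the 5% tolerance below are illustrative assumptions.

```python
# Hypothetical exports from the two data sources for the same page and period.
testing_tool = {"visitors": 40_000, "conversions": 2_012}
analytics    = {"visitors": 39_400, "conversions": 1_985}

def relative_gap(a: float, b: float) -> float:
    """Relative difference between two measurements of the same metric."""
    return abs(a - b) / max(a, b)

TOLERANCE = 0.05   # assumed acceptable discrepancy between sources: 5%

for metric in ("visitors", "conversions"):
    gap = relative_gap(testing_tool[metric], analytics[metric])
    status = "OK" if gap <= TOLERANCE else "check configuration"
    print(f"{metric}: gap {gap:.1%} -> {status}")
```

Small discrepancies are normal (different tracking methods never match exactly), but a large, persistent gap is a sign that one of the two sources needs attention.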
