Criteo’s ad set A/B testing functionality enables you to test and understand the impact of changing various parameters of your campaigns, so you can discover the best strategy to accomplish your business objectives. Our solution provides a simple, fully self-service way to measure the real impact of any adjustable control.
There are two things to keep in mind before launch:
All the ad sets that will be included in the A/B test need to be created and either live or scheduled, not paused. We recommend scheduling the ad sets to start when the A/B test goes live.
To maximize the chances of reaching statistical significance, the budget needs to be large enough to deliver a few million displays on each ad set.
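As a rough guide to what “large enough” means, a standard two-proportion power calculation estimates the displays needed per ad set to detect a given CTR difference. The sketch below is illustrative only; the baseline CTR, lift, and thresholds are assumed example values, not Criteo figures:

```python
from math import ceil, sqrt
from statistics import NormalDist

def displays_per_ad_set(p_control, p_variation, alpha=0.05, power=0.80):
    """Approximate displays needed per ad set to detect the difference
    p_variation - p_control with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for 95%
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p_control + p_variation) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_control * (1 - p_control)
                                 + p_variation * (1 - p_variation))) ** 2
    return ceil(numerator / (p_variation - p_control) ** 2)

# Assumed example: 0.5% baseline CTR, testing for a 10% relative lift.
n = displays_per_ad_set(0.005, 0.0055)
```

Rarer events (such as conversions) have lower rates and need even more traffic, which is why budgets covering millions of displays per ad set are recommended.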
A/B Test Set-Up
To set up an A/B test, follow these steps:
Open Commerce Growth. From the left navigation bar, select Experiments > Set up A/B test.
Give your A/B test a name and select a start date and time. We recommend not setting an end date; this allows you to make a data-driven decision on when to end the test after reaching statistical significance. Selecting a winner metric is optional: it simply sets the default view in the analytics section and can be changed after the test is live.
Select the campaign and ad sets you will be using for the test, along with the population split you would like. You will select 1 ad set as the control and up to 4 ad sets as variations. We recommend splitting the populations evenly to best test each variation and reach statistical significance more quickly. If you do not see your desired ad set in the list, double-check that the ad set start date is on or before the A/B test start date.
Review your selections and launch the test.
It can take up to 24 hours for data to show once your A/B test has started. To view your A/B test results, go to the Experiments page and click the eye icon next to the A/B test you want to view.
There are several key features of the report:
The summary table provides an overview of the test setup and results to date. You can view the uplift for the selected winning metric and see whether your test has reached statistical significance. Click the pencil icon to add additional KPIs and metrics (clicks, sales, etc.) to the table. All metrics available in our standard reporting can be added.
The bottom left chart represents the raw values of the selected winning metric along with the confidence intervals for each ad set. If the confidence intervals do not overlap, your test is statistically significant. This chart can be used for directional results if the A/B test has not reached significance.
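The overlap check on that chart can be reproduced from raw counts. Below is a minimal sketch using a normal-approximation confidence interval for each ad set’s rate; the click and display counts are made-up example values, not product output:

```python
from math import sqrt
from statistics import NormalDist

def rate_ci(successes, trials, confidence=0.95):
    """Normal-approximation confidence interval for a rate such as CTR."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    p = successes / trials
    half_width = z * sqrt(p * (1 - p) / trials)
    return (p - half_width, p + half_width)

def intervals_overlap(ci_a, ci_b):
    """True if the two confidence intervals share any common range."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Assumed example counts: control vs. variation over 1M displays each.
control_ci = rate_ci(5_000, 1_000_000)    # CTR around 0.50%
variation_ci = rate_ci(5_600, 1_000_000)  # CTR around 0.56%
significant = not intervals_overlap(control_ci, variation_ci)
```

Non-overlapping intervals are a conservative signal of significance; when the intervals still overlap, the uplift chart’s own confidence interval (described below in the report) gives a more direct read.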
Easily switch between Click-Through Rate (CTR) and Conversion Rate (CVR) as the metric to analyze via the drop-down menu in the top right corner.
Download your data by clicking on the “Share” button in the top right.
The bottom right chart shows the uplift percentage (the difference between the variation and control for the selected winning metric) over time, along with the confidence interval of the uplift. If 0 falls inside the confidence interval, the A/B test is not significant. This chart lets you see how the uplift has changed over time and whether the confidence interval is still shrinking. You can use it to decide whether your A/B test should continue to run or be stopped:
If the uplift is not yet stable and the confidence interval is still shrinking, the A/B test needs more time to reach statistical significance
If both the uplift and the confidence interval have stabilized, the A/B test can be stopped
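The significance rule behind that chart can be sketched the same way: compute the difference in rates, build its confidence interval, and check whether 0 falls inside it. The counts below are illustrative assumptions, not product output:

```python
from math import sqrt
from statistics import NormalDist

def uplift_with_ci(ctrl_succ, ctrl_n, var_succ, var_n, confidence=0.95):
    """Absolute difference in rates (variation - control) with a
    normal-approximation confidence interval."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    p_c, p_v = ctrl_succ / ctrl_n, var_succ / var_n
    se = sqrt(p_c * (1 - p_c) / ctrl_n + p_v * (1 - p_v) / var_n)
    diff = p_v - p_c
    return diff, (diff - z * se, diff + z * se)

# Assumed example: control 5,000/1M clicks, variation 5,600/1M.
diff, (lo, hi) = uplift_with_ci(5_000, 1_000_000, 5_600, 1_000_000)
significant = not (lo <= 0 <= hi)  # 0 outside the interval -> significant
relative_uplift = diff / (5_000 / 1_000_000)  # uplift as a share of control
```

If the interval still includes 0, letting the test run longer (so the interval keeps shrinking) is the analogue of the “needs more time” case above.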