Best Practices for A/B Split Testing with Email Variants
Luminate Online includes a valuable tool that splits off a small random sample of an audience and delivers variants of a message to it, so that the responses to those variants can be used to determine the best combination of content and design to drive action from your constituents.
The feature is built around varying at least one element in your email, such as the subject line: you create a variant of the original email with a different subject. For example, you could add personalization to one subject line and leave the original subject unpersonalized. It is important to change only one aspect of the message in each variant; changing more than one makes it unclear which change affected the audience's response. Creating variants is outlined in To create a message variant.
Once you have created your original email and variant, you can then choose to set up your A/B Split Test Delivery. You have two options to get started:
- You can find your message and its variants in the list of messages in the messages tab under your campaign and click “A/B Test” in the actions column.
- You can click “Run A/B Test” in the left-hand column of the messages tab.
For example, if your overall target audience for the message is 100,000 recipients, you could choose 5 percent for one variant and 5 percent for the other (assuming you have only two versions of the message; with four or five versions you would split the test sample across more cells).
One common misuse of the A/B split is to take the entire audience of a send and divide it 50% to one variant and 50% to the original, with the idea of applying the results to future sends. Since time of send is a large factor in how an email variant performs, results measured this way do not transfer reliably, and the approach also produces fewer opens on the current send.
The most effective use is to start with a small portion of a larger audience:
The following example uses a base audience of 5 million to illustrate the limited value of a 50/50 A/B test on a large audience.
50/50 split across the full audience:
- Email A sent to 2.5 million recipients has a 10% open rate
- Email B sent to 2.5 million recipients has a 5% open rate
- (250,000 + 125,000) / 5,000,000 = 7.5% open rate

Small test followed by the winning variant:
- Email A sent to 10,000 recipients has a 10% open rate
- Email B sent to 10,000 recipients has a 5% open rate
- Email A sent to the remaining 4.98 million recipients in the total audience has a 10% open rate
- (1,000 + 500 + 498,000) / 5,000,000 = 9.99% open rate

This rough example yields roughly 124,500 more opens than a 50/50 split on the full audience.
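The arithmetic above can be reproduced directly (a minimal sketch; the open rates and audience size are the example values from this article, not live data):

```python
AUDIENCE = 5_000_000
RATE_A, RATE_B = 0.10, 0.05  # observed open rates for the two variants

# 50/50 split across the full audience
half = AUDIENCE // 2
opens_split = half * RATE_A + half * RATE_B  # 375,000 opens

# Small test cells, then the winner (A) to the remainder
test_cell = 10_000
remainder = AUDIENCE - 2 * test_cell         # 4,980,000 recipients
opens_test = test_cell * (RATE_A + RATE_B) + remainder * RATE_A  # 499,500 opens

print(opens_split / AUDIENCE)    # -> 7.5% overall open rate
print(opens_test / AUDIENCE)     # -> 9.99% overall open rate
print(opens_test - opens_split)  # -> roughly 124,500 additional opens
```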
Beyond the lower overall open rate, a 50/50 split on the full audience, especially when the audience exceeds 500,000 recipients, results in a long audience calculation due to the complexity of randomizing the audience. This audience build can take several hours, which may delay delivery of the messages.
For a faster 50/50 split send:
There may be a case where you want to divide a large delivery across a 50/50 split and need the audience built more quickly. In that case, you can create two queries that split the audience in a close-to-random fashion. The queries use the System: Contact Range field; the first selects group members whose contact IDs end in 0-4:
System Contact range is 0-4 (Last 1 digits)
AND is a member of Group Email Recipient Group
and the second selects those whose contact IDs end in 5-9:
System Contact range is 5-9 (Last 1 digits)
AND is a member of Group Email Recipient Group
Then use the “Use Query” action to create groups from these queries.
These two groups are populated effectively at random (contact IDs are assigned sequentially and are not controlled by the admin or constituent), and the membership of the split groups will not change unless the population of the “Email Recipient Group” changes. Rebuilding these query-built groups the day before the send ensures that audience creation happens as quickly as possible. Note that this method requires comparing the deliveries in individual reports, rather than using the built-in Variant Performance Report, which compares two (or more) variants within the same report.
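A quick simulation illustrates why the last-digit split behaves like a stable, near-even random partition (the contact IDs below are simulated for illustration; in Luminate Online they are assigned sequentially by the system):

```python
import random

random.seed(7)
# Simulate a recipient group as an arbitrary subset of the contact ID space,
# since members join the group at different points in time.
contact_ids = random.sample(range(1, 1_000_000), 50_000)

group_a = [cid for cid in contact_ids if cid % 10 <= 4]  # last digit 0-4
group_b = [cid for cid in contact_ids if cid % 10 >= 5]  # last digit 5-9

# The two halves come out nearly equal, and rerunning the same criteria
# yields the same membership as long as the source group is unchanged.
print(len(group_a), len(group_b))  # roughly 25,000 each
```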
Overall, the tool is very effective for measuring audience response, and when these best practices are followed it can make your outreach efforts even more effective.
For a training video with step-by-step instructions on setting up A/B Testing:
This training video walks you through creating message variants, running a report on message performance, and sending the final message.