Content Testing in Zaius
Zaius supports A/B testing of content, which allows you to send different pieces of content to different percentages of your campaign audience. You can then view campaign statistics for each content variation (e.g., opens and clicks) to hone your messaging and maximize engagement with your customers. Further, in the case of an automatic A/B content test, Zaius sends multiple pieces of content to small samples of your segment, automatically determines the winner based on the criterion you choose, and sends the winning content to the remainder of your segment.
What can be tested
Any number of full variations of Zaius content can be tested, i.e., you're free to change any message properties in an A/B test. For sound A/B test results, however, Zaius recommends varying only a small number of properties and testing a small number of pieces of content (generally two). For example, change only the email subject line between two pieces of content. The content clone option helps with these use cases.
Initiating an A/B Content Test
To initiate an A/B content test in Zaius, click the + Add a Variation tab in the Messages section of the campaign creation screen.
You will then be presented with a panel of content testing options:
The configurable options are:
Automatic Testing vs Manual Testing
An automatic test is one in which Zaius executes campaigns in two phases: a test (or experimental) phase and a winner phase. After the experimental phase, Zaius automatically determines the winning piece of content, and that winning content alone is used in the winner phase. In a manual test, Zaius simply sends content to the percentages of your segment that you specify and performs no winner determination.
Automatic tests for campaigns using recipient time zone
If you have enabled "Use recipient's time zone when available" in your campaign's schedule section, each UTC offset is tested independently of all others. Results per offset are presented in the campaign overview screen.
Test Duration, Winning Criterion, and Default Winner apply only to automatic tests:
Test Duration
This is the length of the experimental phase. For a campaign scheduled to run one time, the percentages specified in the slider at the bottom of the options will be targeted at the campaign start time. Then, after the test duration, Zaius determines the winner, and the remainder of the segment is targeted.
Campaign Audience in one time automatic tests
Note that the campaign segment is determined both at campaign start time (when the test phase starts), and again at the time of the winning phase. For example, if user A is not in your campaign segment at campaign start time, but is in the segment at the time of the winning phase, they will be targeted in the winning phase.
Zaius recommends a duration of at least 4 hours for an automatic test, in order to allow enough time for your customers to act on the campaign (e.g., open and click emails) so that Zaius has enough data to determine the winner. In the case of a recurring campaign, any campaign runs occurring between the campaign start time and the end of the test duration are A/B tested; thereafter, the winning content is determined and used for all subsequent campaign runs.
Winning Criterion
The metric that Zaius should use to determine the winning content. For email campaigns, the options are open rate (count of unique users who have opened the email, divided by the number of sends), click rate (count of unique users who have clicked in the email, divided by the number of sends), and click rate of opens (count of unique users who have clicked, divided by the count of unique users who have opened). Zaius recommends open rate for a subject line test; if variations A and B have the same subject but different bodies, click rate of opens is more appropriate.
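As a concrete illustration of these rate definitions, the sketch below computes all three from hypothetical per-variation counts (the numbers and the `email_rates` helper are illustrative, not part of Zaius):

```python
def email_rates(sends, unique_opens, unique_clicks):
    """Return (open rate, click rate, click rate of opens) as fractions."""
    open_rate = unique_opens / sends
    click_rate = unique_clicks / sends
    # Click rate of opens divides by opens, not sends; guard against zero opens.
    click_rate_of_opens = unique_clicks / unique_opens if unique_opens else 0.0
    return open_rate, click_rate, click_rate_of_opens

# Example: 1,000 sends, 250 unique opens, 50 unique clicks.
open_rate, click_rate, ctor = email_rates(1000, 250, 50)
print(open_rate, click_rate, ctor)  # 0.25 0.05 0.2
```

Note how the same 50 clicks yield a 5% click rate but a 20% click rate of opens; the latter isolates body performance from subject-line performance.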
Default Winner
Zaius will only deem a piece of content to be the winner if the difference in winning criterion values for the variations is statistically significant. In the case that Zaius determines the difference to be statistically insignificant (or if the winning criterion values are equal), the test is deemed inconclusive and the default winner is used in the winning phase.
For a test between two pieces of content, Zaius determines significance by calculating a Z score of the two proportions of the test group that match the winning criterion. More formally, if content A is sent to na recipients in the test phase with the fraction pa matching the win criterion (e.g., opening the email if Open Rate is the win criterion), and content B is sent to nb recipients in the test phase with the fraction pb matching the win criterion, then

Z = (pa − pb) / sqrt( p̂ (1 − p̂) (1/na + 1/nb) )

where p̂ = (na pa + nb pb) / (na + nb) is the pooled proportion of recipients matching the winning criterion across both test groups.
The Z score is then used to compute the confidence that the observed difference between pa and pb reflects the true (unknown) difference in the proportions matching the winning criterion, within some margin of error. Zaius calls the winner statistically significant, and the test conclusive, if Z is greater than or equal to 1, which corresponds to a 68% confidence that the difference between pa and pb is significant.
If a content test has more than two pieces of content tested, the winner is statistically significant if it is pairwise statistically significant against all other content variants tested.
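The winner determination described above can be sketched as follows. This is a minimal illustration assuming the standard pooled two-proportion Z score; `z_score` and `conclusive` are hypothetical helper names, not Zaius APIs:

```python
from math import sqrt

def z_score(p_a, n_a, p_b, n_b):
    # Pooled two-proportion Z score: p_a and p_b are the fractions of each
    # test group matching the winning criterion; n_a and n_b are group sizes.
    p_pool = (n_a * p_a + n_b * p_b) / (n_a + n_b)
    std_err = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / std_err

def conclusive(p_a, n_a, p_b, n_b, threshold=1.0):
    # A two-variant test is conclusive when the Z score reaches 1,
    # i.e., roughly 68% confidence that the observed difference is real.
    return abs(z_score(p_a, n_a, p_b, n_b)) >= threshold

# Example: A opened by 25% of 1,000 test recipients, B by 20% of 1,000.
print(round(z_score(0.25, 1000, 0.20, 1000), 2))   # Z ≈ 2.68: conclusive
print(conclusive(0.21, 100, 0.20, 100))            # False: too little data
```

For tests with more than two variants, the same check would be applied pairwise between the leading variant and each of the others, matching the pairwise-significance rule above.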
Content Selection Slider
This is where you specify the percentages of the campaign segment to send each piece of content to during the test phase. More formally, if Message A is sent to 10% of the segment, each customer in the segment is targeted with probability 10%. For a one-time automatic test, Zaius recommends 10% for each message variation, or 5% if the segment is particularly large (in the millions), in order to increase the likelihood of the test result being statistically significant. Note that in the case of an automatic test in a recurring campaign, 100% of your segment is targeted for runs happening during the test phase, so the slider behavior is slightly different in that there is no withheld percentage.
Similarly, for a manual test, 100% of your segment is targeted and you can change the breakdown within it. Zaius generally recommends equal percentages for each piece of content. However, different percentages may be appropriate if you're cautious about sending a certain piece of content to a large portion of your segment. Since the winning variation is determined based on rates (opens, clicks, etc.), winner determination doesn't require equal percentages.
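A minimal sketch of how slider percentages can translate into per-customer assignment, per the probabilistic targeting described above (`assign_variation` is an illustrative helper, not a Zaius API):

```python
import random

def assign_variation(percentages, rng):
    # percentages: mapping of variation name -> percent of segment (sums to <= 100).
    # Each customer independently lands in a variation with that probability;
    # a roll past the total falls in the withheld remainder (returns None).
    roll = rng.uniform(0, 100)
    cumulative = 0.0
    for name, pct in percentages.items():
        cumulative += pct
        if roll < cumulative:
            return name
    return None  # withheld until the winning phase

# One-time automatic test: 10% to A, 10% to B, 80% withheld for the winner.
rng = random.Random(42)
counts = {"A": 0, "B": 0, None: 0}
for _ in range(10_000):
    counts[assign_variation({"A": 10, "B": 10}, rng)] += 1
# counts is roughly {"A": 1000, "B": 1000, None: 8000}
```

For a manual test or a recurring automatic test, the percentages would simply sum to 100, so no customer falls into the withheld remainder.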
Content testing is not applicable to API-triggered push campaigns. Manual testing is supported for event-triggered campaigns, but automatic testing is not.