Beginner's guide to A/B testing: converting email campaigns
Email campaigns and newsletters can be an excellent way to win repeat business as well as new customers. You are already working with a somewhat pre-qualified audience: these people have said they want to hear from you, and many of them have probably already done business with you. And we all know it is easier and cheaper to retain customers than to acquire new ones.

This is why it is vital to run A/B tests whenever you try new techniques or formats in your email campaigns. Improving conversion rates here can make a bigger difference to your bottom line than many other marketing efforts, especially those with similar costs.
Here is the latest installment in our Beginner's Guide to A/B Testing series.

Decide what you will test
The first step in setting up an effective A/B test is deciding what you will test. You may want to test several things, but it is important to test only one at a time to get accurate results. Things you might consider testing include:
Call to action (Example: "Buy now!" vs. "See plans and prices")
Subject line (Example: "Product XYZ for sale" vs. "Discounts on product XYZ")
Which testimonials to include (or whether to include any at all)
Message layout (Example: single column vs. two columns, or different placement of elements)
Personalization (Example: "Mr. Smith" vs. "Joe")
Body text
Headline
Closing text
Images
The specific offer (Example: "Save 20%" vs. "Get free shipping")
Each of these elements is likely to affect a different part of the conversion process. For example, your call to action has a direct effect on how many people buy your product or click through to your landing page. Your subject line, on the other hand, directly affects how many people open your email in the first place.
Think about this when deciding what to test first. If not many people are opening your emails, you probably want to start with your subject line. In general, test the most important elements first: your headline and your call to action will likely have a bigger impact on conversions than the images you use or your body text. Test those first, then work through the others in order of importance.
Test your entire list or just part of it?
In the vast majority of cases, you will want to test your complete list. You want an accurate picture of how your opt-in list responds to your new email campaign, and the best way to get it is to test everyone. However, there are a few cases in which you may not want to test your entire list:
If you have a very large list and the service you use for A/B testing charges per email address. In this case, test the largest sample you can afford, and make sure the addresses you select are chosen at random to keep the results accurate.
If you are testing something really extreme, you may want to limit how many people see it, in case it falls flat. Even then, it is still a good idea to make sure at least a few hundred people see each version you are testing. If you can test a few thousand, even better.
If you are running a limited-time offer and want as many conversions as possible, you may want to run a small trial batch first (a few hundred recipients) and then send the winning version to your entire list.
The larger the test sample, the more accurate the results will be. Make sure the split is done randomly, too: hand-picking recipients (or using two lists from different sources) is a sure way to skew the results. The goal here is to collect empirical data on which version of your A/B test material actually works best.
What does success mean?
Before sending your email versions, it is important to decide what you will measure and what you will consider a success. First, look at your previous results. If you have been using the same email campaign style for months or years, you will have a good set of data to draw from. If your historical conversion rate is 10%, you might aim to raise it to 15% to start.
Of course, perhaps your goal with your first A/B test is simply to get more people to open the email. In that case, look at your historical open rate and decide how much improvement you want to see. If the first round of A/B testing does not deliver that improvement, you may want to run another test with two new versions.
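When setting that target, it also helps to check whether an observed improvement is big enough to trust rather than a random fluke. Below is a rough Python sketch of a standard two-proportion z-test; the counts are invented for illustration, and this is just one common way to sanity-check a lift, not a feature of any particular email tool.

```python
from math import sqrt
from statistics import NormalDist

def ab_lift_check(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is version B's rate credibly higher than A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - NormalDist().cdf(z)  # one-sided: chance of a lift this large
    return p_a, p_b, p_value

# Invented numbers: 10% vs. 15% on 1,000 recipients per version.
p_a, p_b, p = ab_lift_check(conv_a=100, n_a=1000, conv_b=150, n_b=1000)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, p-value: {p:.4f}")  # small p -> likely real
```

A small p-value (commonly below 0.05) suggests the lift is real; a large one suggests you need a bigger sample or another round of testing.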
Tools for testing
Most email campaign software has built-in tools for A/B testing. Campaign Monitor and MailChimp both have integrated tools, as does Active Campaign.
If your email campaign software does not have specific support for A/B campaigns, you can set one up manually. Simply divide your current list into two separate lists at random, then send one version of your email campaign to each. Afterwards, you will have to compare the results yourself, although exporting your data to a spreadsheet can help with this.
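If you go the manual route, the random split itself only takes a few lines of code. Here is a minimal Python sketch, assuming a hypothetical subscribers.csv export with an "email" column (adjust the names to match your own export); it shuffles the addresses and cuts the list in half so neither group is hand-picked.

```python
import csv
import random

# Hypothetical file and column names: adjust to match your own export.
with open("subscribers.csv", newline="") as f:
    subscribers = [row["email"] for row in csv.DictReader(f)]

random.shuffle(subscribers)  # randomize order so the split is unbiased

# To test only part of a large list, sample first instead, e.g.:
# subscribers = random.sample(subscribers, 2000)

midpoint = len(subscribers) // 2
groups = {"list_a.csv": subscribers[:midpoint],   # receives version A
          "list_b.csv": subscribers[midpoint:]}   # receives version B

for filename, group in groups.items():
    with open(filename, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["email"])
        writer.writerows([addr] for addr in group)
```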
Analyze the results
Once you've run your email campaign with the two different versions, it's time to take a look at the results. There are a few categories of results you will want to examine (a quick calculation sketch follows the list):
The open rate
The click-through rate
The conversion rate once visitors are on your website
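As a concrete illustration, here is a small Python sketch that computes all three rates for each version. The counts and field names are invented; in practice you would pull them from your email tool's report or spreadsheet export.

```python
# Invented counts for two email versions; substitute your real export data.
results = {
    "version_a": {"sent": 1000, "opened": 320, "clicked": 96, "converted": 12},
    "version_b": {"sent": 1000, "opened": 350, "clicked": 140, "converted": 14},
}

for version, r in results.items():
    open_rate = r["opened"] / r["sent"]
    click_rate = r["clicked"] / r["sent"]            # clicks per email sent
    conversion_rate = r["converted"] / r["clicked"]  # sales per site visitor
    print(f"{version}: open {open_rate:.1%}, "
          f"click {click_rate:.1%}, conversion {conversion_rate:.1%}")
```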
The reasons for tracking the first two are fairly obvious. But many people may wonder why we would want to track the conversion rate beyond the email itself. Isn't that outside the control of the email?
Yes and no. Ideally, the email you send should not have much effect on the conversion rate once the visitor is on your website. If one email gets 10% of readers to click through to your website and another gets 15%, then the second email should generate 50% more conversions than the first. But that does not always happen.
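To make that arithmetic concrete, here is a tiny sketch with invented numbers, assuming the on-site conversion rate stays identical for both groups:

```python
readers = 1000
site_rate = 0.04  # assumed on-site conversion rate, same for both emails

conversions_a = readers * 0.10 * site_rate  # email A: 10% click through -> 4.0
conversions_b = readers * 0.15 * site_rate  # email B: 15% click through -> 6.0

print(conversions_b / conversions_a - 1)  # 0.5 -> 50% more conversions
```

When your real numbers break this pattern, the on-site rate is differing between versions, and the mismatch usually lives on the landing page.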
It is important that the message in your email is consistent with the message on your website. If you promise visitors a special deal, and that deal is not perfectly clear on your website, you will lose customers. The same thing can happen if your email does not echo the look of your website: visitors may get confused and wonder whether they have landed on the right page.
Be sure to track the conversion rate for each email version to make sure you are not losing sales. The end goal here is conversions, not just clicks. You may find that one email gets more clicks than the other but does not produce as many conversions. In that case, you will probably want to run more tests to see if you can create an email that drives not only more clicks but also more conversions.
Best practices
Here are some best practices to keep in mind when running an A/B email test:
Always run the two versions simultaneously to reduce the chance that your results are skewed by time-based factors.
Test as large a sample as possible to get more accurate results.
Listen to the empirical data you collect, not your gut.
Use the tools available to you to make A/B testing faster and easier.
Test early and test often for the best results.
Test only one variable at a time for reliable results. (If you want to test more than one, look at multivariate testing instead of A/B testing.)