Testing has long been used by for-profit marketers to help figure out what works and what doesn’t while interacting with customers. Luckily, this same idea can be applied to the fundraising world – and as our landscape evolves more rapidly than ever, it’s important to incorporate testing into your strategy to optimize your conversion rates and increase user engagement.
A/B testing pits two versions of a variable element against each other (a clause, a call to action, a type of personalization, ask string options, etc.). It’s one of the best types of testing because it measures two communications pieces side by side to see which one performs better.
Many of us know we should be testing, but aren’t exactly sure how to go about it. That’s understandable – but testing doesn’t have to be elaborate. First, let’s break down why we all need to be testing in the first place, then read on for some best practices when getting started.
The short answer is that the small gains that are made from testing will accumulate over months and years and can result in big payoffs. For example – you test your sustainer ask string, and successfully increase the average donation by $5. That could translate into significant growth of annual revenue over time! By frequently testing different elements of your campaigns, you can find out what exactly works best for your organization and ensure that your fundraising efforts are as successful as possible.
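To make that math concrete, here’s a quick back-of-the-envelope calculation. The sustainer count and gift amounts below are purely hypothetical, just for illustration:

```python
# Hypothetical numbers for illustration only.
monthly_sustainers = 500      # donors giving every month
avg_gift_before = 20.00       # average monthly gift before the test
avg_gift_after = 25.00        # average after the winning ask string (+$5)

# Extra revenue per year from a $5 lift on every monthly gift.
annual_lift = monthly_sustainers * (avg_gift_after - avg_gift_before) * 12
print(f"Estimated extra revenue per year: ${annual_lift:,.0f}")
# 500 sustainers x $5 x 12 months = $30,000
```

Even a modest lift, multiplied across your whole file and a full year, adds up quickly.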
Testing 101
The key concept behind testing is to fully vet an idea that you think will improve your strategy before it’s implemented on a permanent scale. When you’re deciding what to test, consider where you can get the most bang for your buck. Focus on conversion rates, average gift, and general traffic to your website, as these tend to have a direct correlation with revenue.
Think about the goal you’re trying to accomplish and what the test is supposed to answer. Focusing on only one variable at a time is crucial, as is making sure you have a way to track your conversion goal. Otherwise, how will you know if the test is successful or not?
Start with a testing hypothesis that lays out what you’re testing and what you want the result to be, like:
“Changing the website’s donation button to red will increase click rates to our main giving page.”
This statement summarizes what you’re testing and what you hope it will accomplish, with a clear way to measure if it’s successful or not.
Once you’ve decided on your testing hypothesis, you set up the two options – one as Version A, one as Version B. Version A is the original, usually known as the control. Version B is your variant, or whatever you’re testing. After running the test, you compare the results to see which version performed better. If the difference is statistically significant, you usually run the test again to see if you get similar results, and then make the change permanent.
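As a sketch of that workflow, here’s one way you might randomly split an email list between the control and the variant. The subscriber list here is made up for illustration:

```python
import random

# Placeholder list of subscribers, for illustration only.
subscribers = [f"donor{i}@example.org" for i in range(1000)]

random.seed(42)              # fixed seed so the split is reproducible
random.shuffle(subscribers)  # randomize before splitting

midpoint = len(subscribers) // 2
version_a = subscribers[:midpoint]   # control: the original
version_b = subscribers[midpoint:]   # variant: the change being tested

print(len(version_a), len(version_b))  # 500 500
```

Shuffling before splitting is what keeps the two groups random and evenly sized, which matters for trusting the result later.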
Statistically what?
Statistical significance is how we prove that the test actually made a difference – a genuine impact on the specific metric you’re tracking and want to improve – rather than random chance. If a result is significant at the 95% level, there’s only a 5% chance you’d see a difference that large by luck alone. So, the higher the significance level, the more sure you can be of the result.
To determine whether your results are statistically significant, make sure your sample is both randomly selected and evenly split; otherwise, behavioral differences between the two groups could skew the outcome. It also helps to use as large a sample as possible and to run the test for long enough that your data is as accurate as possible.
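If you’d like to check significance yourself, a two-proportion z-test is a common choice for comparing click or donation rates. Here’s a minimal sketch using only the standard library – the click and send counts are hypothetical:

```python
from math import sqrt, erf

# Hypothetical results: clicks out of emails sent for each version.
clicks_a, sent_a = 120, 5000   # Version A (control)
clicks_b, sent_b = 150, 5000   # Version B (variant)

p_a = clicks_a / sent_a
p_b = clicks_b / sent_b

# Pooled rate and standard error for a two-proportion z-test.
p_pool = (clicks_a + clicks_b) / (sent_a + sent_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
z = (p_b - p_a) / se

# Two-sided p-value from the normal distribution.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.3f}")
print("Significant at 95%" if p_value < 0.05 else "Not significant at 95%")
```

With these made-up numbers the variant looks better (3.0% vs 2.4%), but the p-value comes out above 0.05 – a good reminder that a visible lift isn’t automatically a significant one.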
There are lots of options you can try both online and offline – think donation button placement, colors, CTA copy, copy length, personalization, and more. No test is too small to try out, as it will always result in some learning for you and your team in how to best reach your audience.
Live & Learn
Once you’ve gotten the results from your test, document them! So you and your successors don’t keep reinventing the wheel, log all of your tests and archive the results so you’re able to keep track of what you’re learning from them. Then if, two years down the line, someone asks, “Why don’t we have a video on our donation page?”, you can show them that it wasn’t helping to increase donations.
Keep in mind that A/B testing doesn’t create massive change overnight – and it’s not supposed to! It’s a way to see how small changes to your marketing strategy can help you reach your goals over time. It can also help you better understand what resonates with your audience, and make all the difference in meeting (and hopefully exceeding!) your fundraising goals.
Have you tested anything lately that’s made a major change in your strategy? Drop a note in the comments below.