There’s a control for that: what your testing strategy is telling you
Marketing Automation | Email Automation | A/B Testing | Marketing Planning | Split Testing
Marketing planning can be exciting. It's often the time when new ideas get thrown on the table, innovative and analytical thinking converge, and rough concepts get fully baked.
What's not fun is seeing campaigns and approaches flop, particularly when so much thought, creativity, and intention went into them.
Failure, though, is a necessary part of our craft. Unfortunately, when it comes to marketing efficacy, fear of failure often gets in the way.
In fact, that fear can sometimes be so loud that it drowns out a simple truth: behind every carefully executed plan lies an equally important testing strategy.
Those who understand the balance between planning, testing, and analysis are the ones who tend to earn the strongest results, and the praise that comes with them.
So how do you create a framework that incorporates testing into your holistic strategy?
Since testing one variable at a time is optimal, you'll want to build a cadence into your testing program. Depending on what you're looking to accomplish, your annual testing program could look something like the schedule below (see the sketch after the list):
Quarter 1: Email testing
- Audiences
- Subject lines
- Delivery times
Quarter 2: Ad testing
- Audiences
- Messaging
- Retargeting
Quarter 3: Content testing
- Audiences
- CTA
- Format
Quarter 4: Apply learnings to plan
- What worked
- What didn't work
- What needs more testing
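To make that cadence actionable, here's a minimal sketch of the schedule above expressed as a simple plan structure in Python. The structure name and field choices are illustrative, not a prescribed format:

```python
# A sketch of the annual testing cadence as data you could load
# into a script or dashboard; contents mirror the schedule above.
testing_calendar = {
    "Q1": {"channel": "email",   "variables": ["audiences", "subject lines", "delivery times"]},
    "Q2": {"channel": "ads",     "variables": ["audiences", "messaging", "retargeting"]},
    "Q3": {"channel": "content", "variables": ["audiences", "CTA", "format"]},
    "Q4": {"channel": "review",  "variables": ["what worked", "what didn't", "what needs more testing"]},
}

for quarter, plan in testing_calendar.items():
    print(f"{quarter}: {plan['channel']} testing: {', '.join(plan['variables'])}")
```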
It's also important to follow some best practices for the most accurate insights:
- Only test one variable (campaign change) at a time
- Measure results as far down the funnel as possible (e.g., sign-ups or purchases rather than opens)
- Determine your control (original campaign) and variable (campaign you’re testing) to compare results directly
- Split each campaign equally and randomly between audience groups (see the sketch after this list)
- Test and run each campaign at the exact same time (unless you’re only testing time of delivery as a factor)
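For the equal, random split in particular, here's a minimal sketch in Python showing one way to do it. The subscriber list and seed are hypothetical, and most email platforms will handle this step for you:

```python
import random

def split_audience(emails, seed=42):
    """Randomly split an audience into two equal halves:
    one for the control (A) and one for the variant (B)."""
    shuffled = emails[:]                    # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)   # seeded for reproducibility
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical subscriber list
subscribers = [f"user{i}@example.com" for i in range(10_000)]
group_a, group_b = split_audience(subscribers)
print(len(group_a), len(group_b))  # 5000 5000
```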
To obtain a high confidence level in your results (in other words, statistical significance), you'll want to target larger audiences and make sure you evaluate results with context.
For instance, let's say you're testing a time of delivery/deployment between two domestic campaigns with all other elements staying the same:
- One is done in the first two weeks of September
- The other is sent out in the two weeks at the end of December
The first will almost certainly outperform the second, not because of anything you changed, but because most people are off work during the winter holiday season. Right there, your results are skewed and in need of context.
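To gauge significance concretely, here's a minimal sketch of a two-proportion z-test in Python. The sign-up counts and audience sizes are hypothetical, and many A/B testing tools run this check for you:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Is the difference in conversion rates between A and B
    larger than chance alone would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 120 sign-ups from 5,000 recipients (A)
# versus 160 sign-ups from 5,000 recipients (B)
z, p = two_proportion_z_test(120, 5000, 160, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference
```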
All of these considerations can be consolidated into a testing hypothesis statement. For example, suppose you think a campaign can generate more webinar sign-ups by changing the subject line to highlight the specific value the webinar offers rather than a generic announcement of the event. Your full hypothesis could be something like:
“Changing the email subject line to ‘Ease yourself into a stronger cybersecurity posture with our next webinar’ will create more webinar sign-ups because it will be clearer and easier to understand the exact benefits the event will offer.”
This hypothesis is specific about the element you're changing, why you think it will work, and what exactly you will measure (in terms of low-funnel conversions) to test your theory. Other email elements you might build hypotheses around include:
- Audience
- Time of delivery
- Email format
- Subject line
- Email image
- Message layout
- Content
- Copy length
- CTA
To put a test into motion, first schedule when the testing will take place and which campaign you're testing, then establish what your A and B versions will include.
Campaign A should include the original elements while B should include the altered ones related to your hypothesis. Track which is which by writing out a description for each version.
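As a sketch of that bookkeeping, you could record each version in a small structure. The labels and descriptions below are hypothetical, borrowing the webinar subject line from the earlier example:

```python
from dataclasses import dataclass

@dataclass
class Variant:
    label: str        # "A" (control) or "B" (variant)
    description: str  # what this version contains

# Hypothetical descriptions following the webinar hypothesis above
control = Variant("A", "Original subject line: generic webinar announcement")
variant = Variant("B", "New subject line: 'Ease yourself into a stronger "
                       "cybersecurity posture with our next webinar'")

for v in (control, variant):
    print(f"Version {v.label}: {v.description}")
```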
This process is ongoing, allowing you to continuously find ways to improve and measure your email program. Results translate into findings, findings into takeaways, and takeaways into new testable hypotheses.
This rinse-and-repeat process pairs well with behavioral data collection and analysis. Collecting and refining this intel is all part of a sustainable, data-backed email optimization process, and a sign that your marketing team, and your org, are maturing digitally.
About Nicole Crilley
Nicole is a digital strategist and content designer with 10 years of experience in email marketing automation, web design, marketing technology, user experience, and content production. With a versatile background in freelance, consulting, and corporate settings, Nicole specializes in identifying and implementing effective digital strategies.