Top 5 Split Tests Every New FB Advertiser Should Run
If you have never advertised on Facebook before, launching your first campaign can feel intimidating when so many unknown variables could affect its performance.
This is why Facebook recently introduced a tool that gives both new and experienced advertisers a more controlled way to launch their campaigns, with less worry and more potential for insight.
The tool is called “Split Tests”. In marketing terms, it brings A/B testing under the Facebook umbrella, wherein only one distinct element is tested across multiple ad sets.
And because any Facebook campaign performance rests on a variety of variables (audience, bidding, placement, and creatives), being able to test these elements separately gives marketers a better understanding of which elements yield the best performance, and thus the potential for growth.
In this article we will provide the top 5 split tests every new advertiser on Facebook should start with:
Facebook will randomize your audience for a split test by creating non-overlapping segments, so the variable being tested will not be impacted by the audiences.
You can collect statistically significant data about the variable you are testing, because Facebook gives each ad set an equal chance in the auction and varies only the one distinct element being tested.
After the test is complete, you’ll get a notification and email containing results. Facebook will even provide a percentage indicating how likely you are to get the same results should you run this campaign again with the winning ad set. These insights can then fuel your ad strategy and help you design your next campaign with more confidence.
Steps to set up split tests
From the Campaigns tab in Ads Manager, click the green “+ Create” button to open the guided creation tool
Select your campaign’s objective
Check the “Split Test” box and name your campaign. We suggest using a standard naming convention such as [Account Name] | Split Test | [Variable Tested] | [Campaign Objective]
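To keep names consistent across a team, the convention above can even be generated programmatically. Here is a minimal sketch (a hypothetical helper, not part of any Facebook tooling):

```python
def split_test_name(account: str, variable: str, objective: str) -> str:
    """Build a campaign name following the suggested convention:
    [Account Name] | Split Test | [Variable Tested] | [Campaign Objective]
    """
    return f"{account} | Split Test | {variable} | {objective}"

# Example: an audience split test for a conversions campaign
name = split_test_name("Acme Co", "Audience", "Conversions")
print(name)  # Acme Co | Split Test | Audience | Conversions
```

The account name, variable, and objective shown are illustrative placeholders.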
On the next page, choose your testing variable from the “What do you want to test?” drop-down menu. There are four to choose from: Audience, Creative, Placement, and Bidding
Proceed with the campaign build as usual. Depending on the variable you chose, you will be presented with a number of input fields to define each variation. For example, if you are testing the audience, you will have to fill in at least two separate audience targets, each with its own demo/geo/interest/custom-audience fields. All other variables, including the creative, will remain the same across ad sets.
When scheduling your split test, make sure you set the start time at least 15 minutes in the future. We recommend a minimum of 30 minutes, just to give yourself enough time to finish and publish your campaign build.
Types of tests we recommend you start with:
Variable tested: Audience. Funnel stage: Top / Awareness
When you have very little insight into who your target audience is, you may start with a split test that helps you determine which audience profiles or lifestyles are most relevant to your brand.
We suggest comparing at least three types of audiences: one built from wide demographics only (your geo, age range, and gender); one layering in interests, behaviors, relationship status, or job functions you may know from your research; and a lookalike audience built from your email leads or past customers.
Depending on the optimization goal (linked to your funnel objective), you will gain valuable insights about your target audience for future campaigns.
Ad Set A- Wide Demo (Age and Gender)
Ad Set B- Wide Demo +Interests or Behaviors
Ad Set C- Wide Demo + Lookalike Audience
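The three ad sets above can be sketched as plain data before you enter them in Ads Manager. The field names below loosely mirror a targeting spec but are illustrative only, as are the interest and lookalike values:

```python
# Shared wide-demo base for all three ad sets (hypothetical values)
base_demo = {"geo": "US", "age_min": 25, "age_max": 54, "genders": "all"}

ad_sets = {
    "A": {**base_demo},                                    # wide demo only
    "B": {**base_demo, "interests": ["fitness", "yoga"]},  # demo + interests/behaviors
    "C": {**base_demo, "lookalike": "email_leads_1pct"},   # demo + lookalike audience
}
```

Writing variants out this way makes it obvious that only one element changes between ad sets, which is exactly what a split test requires.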
Variable tested: Lookalike similarity strength, based on a single source audience. Funnel stage: Education (Consideration) or Conversion
Ad Set A- Lookalike with 1% similarity
Ad Set B- Lookalike with 3% similarity
Ad Set C- Lookalike with 5% similarity
Variable tested: Animations vs. Static Ads. Funnel stage: Education (Consideration)
Facebook users respond differently to static creative, short-form animation, and long-form animation. Certain creatives do well for prospecting, while others may perform better for remarketing. This split test will help you determine which creatives maximize audience response in the Education stage of the funnel (also known as Consideration).
Ad Set A- Carousel Ads
Ad Set B- Explainer Video- Short Form (6-15 secs)
Ad Set C- Explainer Video- Long Form (30+ secs)
Variable tested: Short-Form vs. Long-Form Animation. Funnel stage: Awareness or Education (Consideration)
You may want to know how well your animations perform against various funnel objectives. At the top of the funnel, you can expect the short-form animation to deliver better results, since the audience is unfamiliar with your brand and long-form messaging will likely be skipped.
But if you are testing short vs. long-form animations at the Consideration stage, where your audience is already familiar with your brand (you might have included web visitors in the targeting), your long-form animation should hypothetically do better: a warmer audience is more likely to engage with a more in-depth explanation.
Split test these two creatives for various audience types to determine which one works better for which objective and funnel stage.
Ad Set A- Short Form Animation (up to 6 secs)
Ad Set B- Long Form Animation (10-30 secs)
Variable tested: Placement. Funnel stage: All
Choose the following settings for your placement variable. Your audience and creative settings will stay the same, so pick the ones most relevant to your optimization goal (Add to Cart, Purchase, Content View, etc.)
Ad Set A- Desktop Only
Ad Set B- Mobile Only
How long should all tests run?
Facebook allows split tests to run between 3 and 14 days. We recommend running every test for at least 5-7 days, depending on the variable being tested and the budget you have.
It is important to understand that for certain optimization goals, it may take longer for the platform to determine a winning ad set, because more impressions (and thus more budget) are needed to produce statistically significant results.
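To build intuition for why more impressions are needed, here is a standard two-proportion z-test sketch (this is generic statistics, not Facebook's internal method; the conversion counts are made up):

```python
from math import sqrt

def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: does ad set B's conversion rate
    differ from ad set A's beyond random noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# |z| > 1.96 corresponds to roughly 95% confidence.
# With 5,000 impressions per ad set, 40 vs. 65 conversions clears that bar:
z = z_score(conv_a=40, n_a=5000, conv_b=65, n_b=5000)
```

Rare, deep-funnel events (like purchases) keep both conversion counts small, so the same percentage gap produces a smaller z — which is exactly why those tests need more impressions and budget to call a winner.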
Recommendation for performance-based clients
If your primary goal is a deep-funnel conversion, such as a long lead-form submission or a product sale, we recommend focusing your early-stage split tests on shallower conversion events.
For example, for a long-form lead acquisition objective, you might launch your first split test optimized for Landing Page Views instead of Leads. With this as your campaign optimization event, you will gather more data and learnings from a split test focused on placements or creative type.
For e-commerce clients, who ultimately optimize for purchases, we suggest starting split tests with an Add to Cart or Checkout Initiation optimization goal first. Facebook will optimize for a shallower conversion higher in the funnel, which can be obtained at lower CPAs, so your variable testing will produce statistically significant, relevant, and actionable results for your future campaigns.
Once your split test is complete, Facebook will email you a summary of results indicating the winning ad set details. In addition, Facebook will include a probability of your campaign receiving the same results if you duplicate it and run again.
In other words, if you create a new campaign with the winning ad set’s variables, you will have a high chance of getting the same results. Success is never guaranteed, but this is a good starting point!
At Abacus, we were able to take the winning ad sets from various split tests and scale them by putting more budget behind them. This did not happen for every split test, as some simply did not produce the desired CPAs in the first place, but that was specific to performance-based campaigns where the determining factor was cost per purchase.
For all other top- and mid-funnel split tests, we discovered top-performing variables that produced sustainable results in future campaigns. Wishing you good luck split testing! Scratch that. Wishing you good hypothesis testing instead. Luck won’t help here.