In the first part of this blog series, we looked at methods to get the most out of the now-default ad type, Responsive Search Ads (RSAs). Now that you have your new ads perfectly crafted with search queries in mind, you might be wondering how to test them to see what actually works.
As Google has evolved and grown, so too have ad formats. These changes have also made it harder to control variables.
Back in the day, all paid media strategists had to do was run a simple A/B test of one Standard Text Ad (STA) against another, changing a single element of the ad. Expanded Text Ads (ETAs) showed up offering three headlines per ad, and suddenly testing got a little harder. Then RSAs came on the scene with more than 43,000 potential ad combinations, laughed at us, and said, "Hold my beer."
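That combination count can be sanity-checked with quick arithmetic. One common derivation (our assumption of the breakdown, not Google's published math) counts ordered picks of 3 headlines from the 15 allowed, paired with either 1 or 2 of the 4 descriptions:

```python
from math import perm

# An RSA holds up to 15 headlines and 4 descriptions.
# A served ad shows 3 headlines (order matters) and 1 or 2 descriptions.
headline_triples = perm(15, 3)              # 15 * 14 * 13 = 2,730
description_sets = perm(4, 2) + perm(4, 1)  # 12 + 4 = 16

total = headline_triples * description_sets
print(total)  # 43,680 possible combinations
```

That lands at 43,680, which lines up with the "43,000+" figure usually quoted for RSAs.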
With so many ad combinations, how can you determine which asset is the top performer? To add insult to injury, Google has yet to release meaningful asset combination data beyond impressions. The platform will rate the 'performance' of each asset as "Best", "Good", or "Low", but that rating is based on the number of impressions each asset received in its ad group, and as we know, impressions aren't conversions.
But all is not lost: Google has provided a few tools we can leverage to retain some control over ad copy and set up meaningful tests.
[Are RSA’s too much for your business to handle? Head to our Paid Media services page to learn how we can help]
First and foremost, don't try to test Responsive Search Ads (RSAs) against Expanded Text Ads (ETAs). They're not the same ad format, and Google tests them differently. As mentioned above, Google Ads has yet to release any real data from RSAs beyond asset impressions and combination impressions, making it difficult to understand which headline combination works best for your account.
Furthermore, unlike with ETAs, Google has yet to share a best practice for the optimal number of Responsive Search Ads per ad group. Three is the maximum number of RSAs allowed per ad group; Google seems fine with just one, and obviously, you need at least two for testing.
Lastly, keep in mind that testing can be challenging if you get fewer than 10,000 impressions per month. Make sure to use labels to identify your ads, and run a pivot table to determine the ad winner.
You’re probably already familiar with this type of testing from the now-retired ETAs (and their precursor, the STA).
The idea is simple: Test two themes against each other and let the market decide which it prefers. In the following example, we’re testing pricing against benefit statements.
Keep in mind, the idea behind the RSA format is to allow Google to mix and match assets, thereby providing the user with the best possible ad combination for their query. Consider pinning headlines or limiting the number of assets to help control the test.
In the example above, the company's name is pinned to Headline 1, the themes are pinned to Headline 2, and the same call to action (CTA) is pinned to Headline 3. This makes Headlines 1 and 3 the constants, allowing us to test the Headline 2 variables against each other.
The drawback, given Google's lack of insights, is that we don't know which headline converted. Was it 'Books from $1' or 'Hundreds of Books Under $9.95'? If you want more certainty, try ETA-format pinning.
This type of test takes the responsiveness out of an RSA, forcing it to behave like an expanded text ad. The principle is simple, and if you are new to adopting RSAs this is also an easy way to transition your old ads into the new format.
Structure your ad just like you would for an expanded text ad. You have three headlines and two descriptions. Pin each asset in place. Don’t be surprised if you get a ‘poor’ ad strength score.
As noted in Part 1 of this blog series, after you’ve uploaded your new ad into the platform, Google will score your ad as one of four things: Poor, Average, Good, or Excellent. It may even give you suggestions on how to improve your ad.
Ad Strength has zero effect on your ad’s metrics. It does not indicate whether your ad is performing poorly or excellently. Ad Strength corresponds to Google’s ability to test your assets. Unpinned ads using all 15 headlines and four descriptions simply give Google more asset combinations to test.
Fully pinned ads (like an ETA) provide exact messaging, so there is no room for Google to test asset combinations.
Always keep your ads relevant rather than chasing “excellent” scores, especially if you have compliance concerns, are B2B, or require other specific ad messaging.
This is a total Wild West move, and if you have room for risk this might be the test for you. Test by pinning pools of assets against fully unpinned assets.
In the Pricing Ad, Google has a pool of asset options pinned to each position from which it can draw. In the Benefits Ad (unpinned), Google gets free rein to assemble the ad in any way it sees fit. Don't be surprised if Google gives the unpinned ad an Excellent ad strength score and the pinned ad a Poor one. Hark back to the Ad Strength note and soldier on.
Another testing method is to pin the top impression combination, test it against another pinned ad, and then review the metrics to see what Google is optimizing for.
Unfortunately, this is about the extent of reporting that Google is providing on asset combinations for now, and you can only view the last 90 days or select one month at a time. Regardless, in this test we’ll pick the ad with the greatest impressions from the last 90 days and test it against another pinned ad of our choosing.
In this example, we’ve identified the ad asset combination that has received the most impressions and we’ll test it against another ad with pinned headlines.
Keep all other assets the same in both ads. Summarize with a pivot table or use a predefined report by label to determine which performed best.
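The pivot step can happen in a spreadsheet or, as a rough sketch, in pandas. The DataFrame below stands in for a Google Ads export; the labels, column names, and numbers are hypothetical placeholders, not real report headers:

```python
import pandas as pd

# Hypothetical ad-level export, with each ad labeled by its test theme.
data = pd.DataFrame({
    "label":       ["pricing", "pricing", "benefits", "benefits"],
    "impressions": [5200, 4800, 5100, 4900],
    "clicks":      [260, 230, 310, 280],
    "conversions": [12, 10, 19, 15],
})

# Roll the ads up by label, then compute the rates that decide the winner.
pivot = data.pivot_table(
    index="label",
    values=["impressions", "clicks", "conversions"],
    aggfunc="sum",
)
pivot["ctr"] = pivot["clicks"] / pivot["impressions"]
pivot["conv_rate"] = pivot["conversions"] / pivot["clicks"]
print(pivot)
```

With numbers like these, the benefits theme wins on conversion rate even though impressions are roughly even, which is exactly the distinction impressions-based "asset performance" labels can't show you.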
When reviewing combination impressions, your instinct is probably to replace or remove the headlines with the fewest impressions. Due to the lack of insights, however, you don't know why they're not receiving impressions. There may be an underlying reason; a highly qualified B2B asset, for example, could be suppressed if Google is optimizing for a high click-through rate (CTR).
You can also use Google’s ad variation tool under Experiments. The tool allows you to test and iterate creatives through split testing. Note, however, this tool currently works at the campaign level only.
With the ad variation tool you can test one change across multiple campaigns by adding or removing assets, replacing text, swapping assets, pinning and more.
Of course, all the tests mentioned above are launching pads for creating your own tests. There are rumors that Google will start releasing more combination data in the future, but until then we'll keep testing to make a measurable difference.
If you need help making a measurable difference with your business’ online marketing, reach out to Digital Third Coast to learn more.