Writing ad copy is not always easy and often requires a lot of creativity, because you want to deliver a unique message and persuade the searcher to click on your ad. There are different ways to enhance your ad copy (Sitelinks, Product Extensions, click-to-call numbers, etc.), but your message is still the key element. How do you know you’ve got great ad copy? You test it against another ad.
Usually, ad testing focuses on CTR. You run a pair of ads simultaneously and watch which one wins once they have accumulated a given number of clicks. Of course, CTR is one of the major factors affecting Quality Score, but your ultimate goal is to generate conversions. Keep this in mind while testing.
(Note: always choose “Rotate” in ad serving settings. Do not let Google optimize your ad serving during the test!)
Ad copy testing requires some time. It is not enough to run a pair of ads for a day or two to decide which one performs better than the other. Rather, you need to let the test run long enough to collect sufficient data to confidently decide that one ad is significantly better than the other. How do you know when you have reached that point? Well, if ad copy A has a higher click-through rate, it should be better than ad copy B, right? Not necessarily. What if one ad received a higher number of clicks by pure chance? Do you have enough data to rule that out?
So, hold your judgment until you calculate the statistical significance of your tests. It is not as difficult as you may think.
First, what is statistical significance? In short, results are considered statistically significant if it is unlikely that they occurred by chance. There are also different confidence levels for statistical significance: 80%, 95%, 99%, and 99.9%; each represents the reliability of the test results. You can read more about statistical significance on the AdWords help page.
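To make those confidence levels concrete: each one corresponds to a z-score threshold that your test statistic must exceed before you can call the result significant at that level. A minimal sketch using only Python's standard library (the two-sided thresholds below are standard statistics, not specific to any particular calculator):

```python
from statistics import NormalDist

# For each confidence level mentioned above, print the two-sided z-score
# threshold the test statistic must exceed to be significant at that level.
for confidence in (0.80, 0.95, 0.99, 0.999):
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    print(f"{confidence:.1%} confidence -> |z| > {z:.3f}")
```

For example, the familiar 95% level corresponds to a |z| threshold of about 1.96; the stricter the confidence level, the larger the difference in CTR (or the more data) you need to clear it.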
I created a simple Excel Ad Copy Split Testing Significance Calculator to make this easier. Simply fill in the yellow cells with the appropriate data and choose the significance level.
Ad Copy A is the ad with the higher CTR (the control split). Here is an example:
Two text ads run in an ad group. Each has close to 900 impressions. It seems that the first ad, at 5.07% CTR, is the winner. Do we have enough data to make a decision?
Looking at the statistical significance, we can be only 84% confident that the CTR difference didn’t occur by chance. Now you can decide whether that is good enough for your program. FYI: in most cases, 95% is the minimum accepted confidence level.
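The same kind of calculation can be sketched in a few lines of Python using a standard two-proportion z-test (one common way such calculators are built; the click counts below are hypothetical stand-ins, chosen only to resemble the scale of the example, not the actual figures behind it):

```python
from math import sqrt
from statistics import NormalDist

def split_test_confidence(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test: confidence that the CTR difference is not chance."""
    ctr_a = clicks_a / imps_a
    ctr_b = clicks_b / imps_b
    # Pooled CTR under the null hypothesis that both ads perform the same.
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = abs(ctr_a - ctr_b) / se
    # Convert the z-score into a two-sided confidence level.
    return 2 * NormalDist().cdf(z) - 1

# Hypothetical counts at roughly the scale of the example (~900 impressions each):
conf = split_test_confidence(clicks_a=45, imps_a=887, clicks_b=36, imps_b=905)
print(f"Confidence: {conf:.1%}")
```

With samples this small, even a CTR gap of a full percentage point often fails to reach the 95% bar, which is exactly why you let the test keep running instead of declaring a winner early.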
The same concept applies to landing page split-testing.