Ad Copy A/B Testing for Statistical Significance is Easy! (UPDATED)

Writing ad copy is not always easy and often requires a lot of creativity, because you want to deliver a unique message and persuade the searcher to click on your ad. There are different ways to enhance your ad copy (sitelinks, product extensions, a click-to-call number, etc.), but your message is still a key element. How do you know you’ve got great ad copy? You test it against another ad.

Usually ad testing focuses on CTR. You run a pair of ads simultaneously and watch which one wins on clicks and CTR. Of course, CTR is one of the major factors affecting Quality Score, but your ultimate goal is to generate conversions. Keep this in mind while testing.

(Note: always choose “Rotate” in ad serving settings. Do not let Google optimize your ad serving during the test!)

Ad copy testing requires some time. It is not enough to run a couple of ads for a day or two and decide which one performs better. Rather, you need to let the test run long enough to collect enough data to confidently conclude that one ad is significantly better than the other. How do you know? Well, if ad copy A has a higher click-through rate, it must be better than ad copy B, right? Not necessarily. What if one ad received more clicks by sheer random chance? Do you have enough data to rule that out?

So, hold your judgment until you have calculated the statistical significance of your test. It is not as difficult as you may think.

First, what is statistical significance? In short, results are considered statistically significant if it is unlikely that they occurred by chance. There are also different confidence levels for statistical significance: 80%, 95%, 99%, and 99.9% – each represents the reliability of the test results. You can read more about statistical significance on the AdWords help page.

I created a simple Excel Ad Copy Split Testing Significance Calculator to make this easier. Simply fill in the yellow cells with the appropriate data and choose the significance level.

Ad Copy A is the ad with the higher CTR (the control split). Here is an example:

Two text ads are running in an ad group, each with close to 900 impressions. It seems that the first ad, at 5.07% CTR, is the winner. But do we have enough data to make that call?

[Screenshot: the PPC test statistical significance calculator]

Looking at the statistical significance, we can be only 84% confident that the CTR difference didn’t occur by chance. Now you can decide whether that is good enough for your program. FYI: in most cases, 95% is the minimum accepted confidence level.
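
If you want to sanity-check the spreadsheet (or reproduce the math in your own tools), a one-tailed two-proportion z-test is a common way such calculators are built. I can’t promise the spreadsheet does exactly this, and the click and impression counts below are illustrative, not the exact figures from the screenshot. A minimal Python sketch:

    import math

    def significance(imps_a, clicks_a, imps_b, clicks_b):
        """Confidence that the CTR difference between two ads is real.

        One-tailed two-proportion z-test: returns the probability that
        the observed CTR gap did not arise by chance alone.
        """
        ctr_a = clicks_a / imps_a
        ctr_b = clicks_b / imps_b
        # Pool both ads under the null hypothesis that they perform equally
        pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
        z = abs(ctr_a - ctr_b) / se
        # Standard normal CDF via the error function
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    # Illustrative numbers: Ad A 46 clicks / 907 impressions (5.07% CTR),
    # Ad B 37 clicks / 905 impressions (4.09% CTR)
    print(f"{significance(907, 46, 905, 37):.0%}")  # prints 84%

At 84%, the difference falls short of the usual 95% bar, so the right move is to keep the test running and collect more data.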

The same concept applies to landing page split-testing.
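
The same math carries over, as Lisa points out in the comments below: for a conversion-rate test, treat clicks as the trial count and conversions as the success count. Reusing the hypothetical significance() function sketched above (numbers again made up for illustration):

    # Conversion-rate test: clicks in, conversions out (hypothetical numbers)
    # Page A: 31 conversions / 520 clicks; Page B: 18 conversions / 505 clicks
    print(f"{significance(520, 31, 505, 18):.0%}")  # prints 96%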

  • Lisa

    So easy and simple to use. Thanks Serge! You can also use it to test the statistical confidence of conversion rates by inputting clicks and conversions instead of impressions and clicks.

    March 4, 2011 at 8:33 am
  • I have been thinking about this: if Ad A is the control, doesn’t it have enough historical data to favour it over the test ad? Is A/B testing the only method to determine this? In most of my cases the control always wins; what do you think is driving that? Your insights will be helpful.

    June 28, 2011 at 1:10 pm
  • NS

    Great post! Wondering if you have any suggestions for measuring statistical significance across ad groups, i.e. comparing multiple ad groups to see which one is statistically most significant?

    July 19, 2011 at 11:07 am
  • Cathy Peterson

    Great and easy to follow post. Looking forward to trying out the calculator.

    April 30, 2012 at 6:07 am
  • Thank you for this. It’s just what I needed 😀

    May 11, 2015 at 4:51 pm
  • Andres

    Is there any way to use this sheet to bulk-calculate A/B ad copy across hundreds of line items?

    January 4, 2016 at 12:24 pm
  • Daniel

    It still works, great spreadsheet 🙂

    March 15, 2016 at 7:12 am
