You’ve got your results from your first batch of RankScience A/B Enhancements – now what?
With SEO testing, creating variants and rolling out your experiments is only part of the battle. At RankScience, folks often ask us: “My RankScience SEO Tests are complete… now what do I do?” The answer: Analyze, Hypothesize, and Iterate.
When an SEO Test concludes on RankScience, it will look like this.
This is your Enhancement Leader Board, where you’ll see your projected winner along with its percentage click-uplift data. Please note that this percentage is derived from the relationship between clicks and impressions. As such, you can think of it as closely related to CTR in Google Search Console (in both metrics, higher means more clicks).
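Since the uplift percentage is drawn from clicks and impressions, a quick sketch can make the relationship to CTR concrete. The numbers below are hypothetical, and the `ctr`/`uplift` helpers are illustrative, not part of any RankScience tool:

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks per impression."""
    return clicks / impressions

def uplift(recipe_ctr: float, control_ctr: float) -> float:
    """Relative click uplift of a Recipe over the Control, as a percentage."""
    return (recipe_ctr - control_ctr) / control_ctr * 100

# Hypothetical Search Console-style numbers for one test group:
control = ctr(clicks=480, impressions=12000)   # 4.0% CTR
recipe_a = ctr(clicks=492, impressions=12000)  # 4.1% CTR
print(f"Recipe A uplift over Control: {uplift(recipe_a, control):.1f}%")  # 2.5%
```

A 2.5% relative uplift like this is in the same ballpark as the first Leader Board example discussed below.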
Now, when analyzing the SEO Testing Leader Board, we always emphasize that it’s not just about identifying a winner; it’s about analyzing the gaps between the percentages across Recipes (please note: “Recipes” are the variants you’re testing, where “Recipe A” is Variant A).
For example, in the RankScience Leader Board above, the percentage difference between the Winning Recipe and the Control is approx. 2%–3%. That’s a solid indication that these results offer valuable insight, but the gap between the winner and the Control could be more definitive.
In addition, if you analyze the results further, you’ll notice that “Recipe C” and “Recipe D” are actually quite far behind the Winning Recipe A and Control.
For comparison, let’s also analyze the following RankScience Leader Board, from a different SEO Test.
By contrast, you’ll note the difference in percentage between the winning Recipe A and the Control is much greater – approx. 8%.
In Example #1, while Recipe A is slated as the winner, its percentage difference from the Control isn’t as wide as we’d prefer. In addition, two Recipes have severely under-performed. Let’s say that in this testing environment the SEO Test has rolled out on 100 URLs. That’s 5 testing variants (one Control | 4 Recipes) at 20 URLs per group.
Therefore, my hypothesis would be that factoring out the two under-performing Recipes gives my winning Recipe A and the Control a greater pool of URLs to be tested on (roughly 33 URLs per variant instead of 20), which can yield a more dramatic percentage difference between the two variants (and, as such, improve confidence in my winner).
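The reasoning here is simply that fewer variants means more URLs (and therefore more impressions) per variant, which tightens each variant’s CTR estimate. A minimal sketch, assuming a hypothetical 5% baseline CTR and 200 impressions per URL, and treating each impression as an independent Bernoulli trial (a simplification of real search traffic):

```python
import math

def urls_per_variant(total_urls: int, n_variants: int) -> int:
    """Evenly split the test's URL pool across variants."""
    return total_urls // n_variants

def ctr_standard_error(ctr: float, impressions: int) -> float:
    """Approximate standard error of a CTR estimate (binomial model)."""
    return math.sqrt(ctr * (1 - ctr) / impressions)

total_urls = 100
impressions_per_url = 200   # hypothetical traffic level
baseline_ctr = 0.05         # hypothetical 5% CTR

for n_variants in (5, 3):   # before and after dropping two Recipes
    urls = urls_per_variant(total_urls, n_variants)
    se = ctr_standard_error(baseline_ctr, urls * impressions_per_url)
    print(f"{n_variants} variants -> {urls} URLs each, CTR std. error ~ {se:.4f}")
```

The standard error shrinks with the larger per-variant pool, so the same true gap between Recipe A and the Control stands out more clearly.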
In Example #2, with such a strong percentage difference between the winning Recipe A and the Control, this is an indication that we can place enough confidence in Recipe A being the clear winner.
For Example #1, we would remove Recipe C and Recipe D, then re-deploy the SEO Test for another reporting cadence (approx. 28 days). If after that time the winner remains the same, and the percentage gap between the two holds at approx. 2%–3%, then it’s a judgment call whether to declare Recipe A the overall winner.
In Example #2, since I’m confident in the winner of this test, I would roll it out 100% to the directory/group of pages I was testing on. This can be done by initiating a “Change” Enhancement from your RankScience Dashboard.
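To make the two decision paths concrete, here’s an illustrative sketch. The 5-point threshold for a “definitive” lead is an assumption chosen to separate the two examples above, not a RankScience rule:

```python
# Assumption: a made-up threshold separating "roll out" from "re-test".
DEFINITIVE_GAP = 5.0  # percentage points of uplift over the Control

def next_step(winner_uplift_pct: float) -> str:
    """Suggest a next action given the winner's uplift over the Control."""
    if winner_uplift_pct >= DEFINITIVE_GAP:
        return "roll out the winner via a 'Change' Enhancement"
    return "remove under-performers and re-deploy for another ~28-day cadence"

print(next_step(8.0))   # Example #2: wide gap, ship it
print(next_step(2.5))   # Example #1: narrow gap, iterate
```

In practice you’d weigh this alongside traffic volume and how long the test has run, but the shape of the decision is the same.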
SEO testing keeps you ahead of the curve
The reality is that running an SEO test is never a one-time occurrence. SEO is always changing; user-search psychology is always changing. As such, you have to gather and interpret as much data as possible to help you make the right choices for your site’s growth. By treating the method above as a cyclical approach to SEO Testing, you keep your organic optimization efforts as lean as possible, so your pages maximize all the potential SEO traffic they can net.