The Holy Grail of SEO

What’s the most common question you hear when pitching a new client? If you work in SEO, you know it instinctively: “How much additional traffic can I expect from your work?”

Wouldn’t it be great if there were a simple answer to that question?

The imprecise nature of SEO tends to put clients on edge. A reliable traffic prediction metric is truly the Holy Grail for search marketers. Even an educated guess correlating stronger rankings and optimization to improved traffic would put the client at ease and add some legitimacy to what we do. But even if we could predict keyword boosts based on search ranking, there’s still no guarantee that we would achieve the desired ranking based on implemented recommendations alone.

There are many variables that need to be considered when trying to gauge traffic increases — some of them obvious, some of them subtle. Let’s not fall into the pitfalls of Arthurian legend as we examine some of the fallacies in the quest for predicted traffic metrics.

It Starts with Search Volume

First, consider potential search volume. What is the universe of search traffic we hope to achieve if we receive every click for a particular search term?

There are several tools out there that try to do just that. Do a side-by-side comparison of their estimated volumes, however, and you may find discrepancies of more than 100 percent.

Search volume is an ideal starting point for our quest, but it’s all downhill from here.

More Than One Search Engine?

It would be much easier if everyone used Google to search. But because there are several engines vying for your search query, a proper traffic calculation should be broken down by engine and market share for accuracy. For the sake of efficiency, we could use the published market share figures, but actually investigating the market share per the client’s analytics package would make the calculation that much more accurate.

Clicks by Position

Next, we need to know exactly how many more clicks the number three listing receives over the number four or number eight listings. It would be nice if the engines would publish a study outlining the click-through percentage for the top 20 results; heck, I’d even be happy with the top 10. Again, this percentage could vary from term to term by topic, words per search phrase, and any other level of specificity.

The click-through rate per ranking position is a key piece of data missing from our Holy Grail equation, and it’s further complicated by universal search. Now we need to know not only the CTR each ranking position gets, but also how it’s affected when an image or video shows up as one of the results. Don’t worry — it still gets more complicated.
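To make the moving parts concrete, here’s a minimal sketch of the calculation described so far. Every number in it (the search volume, the market-share split, and especially the CTR-by-position figures) is a made-up placeholder, since the engines don’t publish this data.

```python
# A minimal sketch of a traffic estimate for one keyword, combining
# search volume, engine market share, and CTR by ranking position.
# All figures are hypothetical placeholders, not published data.

monthly_search_volume = 10_000  # assumed searches/month for the keyword

# Assumed share of searches each engine handles (ideally taken from
# the client's own analytics rather than published figures).
engine_market_share = {"google": 0.65, "bing": 0.20, "other": 0.15}

# Assumed click-through rate per ranking position; no engine publishes
# these, which is exactly the missing piece described above.
ctr_by_position = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def estimated_clicks(rank_by_engine: dict) -> float:
    """Estimate monthly clicks given the keyword's rank on each engine."""
    clicks = 0.0
    for engine, share in engine_market_share.items():
        position = rank_by_engine.get(engine)  # None if unranked there
        ctr = ctr_by_position.get(position, 0.0)
        clicks += monthly_search_volume * share * ctr
    return clicks

# e.g. ranking #1 on Google and #3 on Bing, unranked elsewhere
print(estimated_clicks({"google": 1, "bing": 3}))
```

Even this toy version shows the problem: change any one assumption and the estimate swings wildly, and universal search results aren’t accounted for at all.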

Traffic from Implementation

You can lead a client to water, but you can’t make them drink. We can provide clients with as many recommendations as make sense, but if they only implement 50 percent of them, how can we predict what kind of ranking increases to expect?

What if they choose to only implement the page-based recommendations and never address global site issues that impede crawlability? Recognizing that it’s not always possible to implement every recommendation means that each small SEO change must be held accountable on its own.

We’d need to numerically predict the SEO impact of each optimization action per keyword (e.g., will new metadata yield a one-position jump while stronger internal linking yields two?), which may require some kind of divining rod or crystal ball.
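As a thought experiment, the kind of model that crystal ball would have to produce might look like the sketch below. The per-action ranking gains are invented placeholders, and the assumption that gains simply add up is almost certainly wrong in practice.

```python
# A hypothetical sketch of the "impact per optimization action" model
# described above. The ranking gains are invented placeholders; real
# per-action impact data doesn't exist.

# Assumed positions gained per implemented recommendation (made up).
impact_per_action = {
    "rewrite_metadata": 1,
    "strengthen_internal_linking": 2,
    "fix_crawlability_issues": 3,
}

def predicted_rank(current_rank: int, implemented: list) -> int:
    """Naively assume each implemented action's gain simply adds up."""
    gain = sum(impact_per_action.get(action, 0) for action in implemented)
    return max(1, current_rank - gain)  # can't rank better than #1

# A client who implements only the page-level recommendations gets
# only part of the predicted lift:
print(predicted_rank(8, ["rewrite_metadata", "strengthen_internal_linking"]))
```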

Still, I bet we’d see a significant increase in SEO implementation if we could provide the client with a semi-accurate estimation of expected traffic from fixing site issues.

The Social Media X-Factor

The proliferation of new social media tactics throws a pretty big wrench in the gears of traffic prediction. In the ADD cultures of social communities fueled by instant information and feedback, whims are subject to change without notice. Even Google Trends can only show you topics of interest, not the current (or future) opinion on them.

Given its subjective volatility, how does social traffic factor into the equation? If a client’s article makes it to the front page of Digg and experiences the “Digg effect,” how do you predict traffic beyond the spike? And how deeply will it impact the SERPs? And for how long?

Even if everything else is tied up in a neat little package, the influence of social media will still throw the predicted traffic calculation out the window.

Golden Chalice or Fool’s Gold?

The easy part of estimating SEO traffic based upon rank improvements is identifying the key variables; getting the data to make the equation work is the roadblock. The search engines hold the key to a lot of this information, and if they truly want to make their results more relevant, they could make more of this data public to justify the need for optimization.

We’ve worked with some big-name brands that are experts in their field but don’t appear in organic rankings. They simply can’t grasp the value they’ll receive from the money spent on optimization. More traffic prediction data could help sway them to the side of best practices.

Of course, predicted traffic volume is much more of a wild goose chase than a divine crusade. Search engine market share, clicks per ranking, client implementation levels, social media — everything changes constantly, and the instability of one factor affects all others.

This is the nature of the new marketing landscape. Things are never fully predictable. We can make reasonable assertions, but an agency’s track record is ultimately the best measure of potential success. A thorough understanding of all the elements that can affect a search campaign provides much more value than an ethereal traffic metric ever could.

I’m not saying we should abandon the quest for this golden measurement; just that the journey to find it may be infinitely more useful.
