Insatiably curious blogger PotPieGirl stumbled upon an indexed copy of Google’s training manual for human URL quality raters in October. The 120+ page guide was more a reinforcement of search engine optimization (SEO) best practices than an earth-shattering revelation about the inner workings of the organic algorithm.
The actual influence of these human raters is debatable; many believe they are used in troubleshooting and improving the algorithm, rather than having a direct impact on rankings.
Matt Cutts, Google Distinguished Engineer and head of the web spam team, tackled the subject in a Google Webmaster Help video blog yesterday, in response to a question submitted by SEO AJ Kohn. What did Cutts have to say?
“Raters Are Really Not Used to Influence Google’s Rankings Directly”
Human raters work under the Search Quality Evaluation Team and are used in the initial testing phases of proposed changes to the organic search algorithm.
“There are hundreds of raters who are paid to – given a URL – say, ‘Is this good stuff? Is this bad stuff? Is it spam? How useful is it?’” Cutts explained. Once those URL ratings are assigned, engineers can test proposed algorithmic changes within Google’s internal corporate network and have the new results evaluated.
That evaluation of new results should show whether the results tend to be better, according to Cutts.
“Raters Might Miss Some Spam or Might Not Notice Some Things...” and That’s OK
One of my previous posts discussed whether it was possible (and worth it) to try to optimize your site for Google’s human raters. At that time, a person claiming to be a Google rater had said in a forum that they processed between 30 and 60 of these evaluations an hour. They weren’t going into much depth; they were looking for the more glaring issues.
As Cutts explains in his video post, this is just a part of the engineering testing process. Human raters don’t have to get it exactly right; there are hundreds of them working to evaluate URLs, giving Google insight into trends in proposed changes and the new results they produce.
If a proposed change makes it past these initial test phases, it is put to a side-by-side test, where two sets of results are pitted against one another and evaluated for relevance and improved user experience.
In side-by-side testing, human raters are assigned a query and two sets of search results in a sort of “blind taste test.” They simply indicate which set of results is better, in their opinion, and may leave comments for Google to consider.
You Just Might Be a Human Rater
After all of this internal testing, Google serves up the new results to a small percentage of users and uses a variety of indicators to gauge user satisfaction. Human raters are “no substitute for the intuition and the experience search engine engineers have,” explained Cutts, pointing out that a change that rids results of spam may still not be the best set of results for the user.
“We do take the evaluation and the results of human raters, as well as the analysts who evaluate those results, very, very seriously,” he said. “We want to make sure that we’re launching a change that is, overall, a big improvement – ideally, at least an improvement – for users.”
“Those Ratings Don’t Directly Affect the Search Engine Results.”
Cutts tries to lay to rest the rumor that Google’s human quality raters have a direct influence on search engine rankings. Rather, they are part of the massive testing process that leads to some 500 tweaks to Google’s search algorithm annually.
If you can get your hands on a copy of the Google Human Search Quality Raters Guide, it’s still worth a read, if only to reinforce best practices in white hat SEO.