Search related patents provide insight into what’s going on in search engine algorithms, and search marketers who understand these “rules of ranking” are better positioned to win top position in search results.
A special report from the Search Engine Strategies conference, August 8-11, 2005, San Jose, CA.
Despite the early hour, the "Patent Files" session at SES San Jose was filled with bleary-eyed algorithm chasers, the curious, and those looking for a "Reader's Digest" interpretation of math-filled search patents. Experts from different disciplines offered a well-rounded interpretation of the recent proliferation of search-related patents.
What is a Patent?
Patents are formal documents that grant property rights for an "invention" or intellectual property to the inventor. According to the US Patent and Trademark Office (http://www.uspto.gov/), the right granted by a patent excludes others from making, using, or selling the invention without permission for a period of time, but it doesn't require the applicant to actually use the invention.
From a search marketer's perspective, it is important to keep in mind that just because a search engine files a patent on a technique doesn't mean the search engine uses that technique, nor does it mean all the features of the patent have to be implemented.
Patent applicants are aware that competitors (or search engine optimizers) will carefully scrutinize patents for clues that could diminish the value of the patent. They may obscure the information or purposely add items in a patent with no intention of ever using them to pull the spotlight off techniques they are actually using or plan to use. Clouding the issue, offering up red herrings—all is fair in the Patent Wars.
Historical Data and Search Engines
Up first in the session was Rand Fishkin, CEO of the Seattle-based SEOmoz.org. Rand was unable to attend in person, but thanks to moderator Chris Sherman's now famous "channeling" expertise (aided by a PowerPoint presentation with embedded audio), Rand was still able to walk the audience through his presentation on the Google patent entitled "Information Retrieval Based on Historical Data." Rand covered temporal analysis of links, pages, and sites. An interesting description included in the patent was the concept that domain information could be considered when evaluating a site. Rand also described techniques that Google might use to detect spam.
Representing the academic world was panelist Dr. Edel Garcia from Mi Islita, talking about the 2003 Google patent related to "Detecting Query Specific Duplicate Documents." Dr. Garcia is both an academic and a search marketer, so while he used higher math to explain special filtering techniques for detecting duplicate content, he also offered gold nuggets, like reminding search marketers to be aware of the filters in their copywriting so they aren't accidentally penalized.
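To make the idea of duplicate-content filtering concrete, here is a minimal sketch of one well-known near-duplicate detection approach, word "shingling" with Jaccard resemblance. This is an illustrative technique from the information-retrieval literature, not necessarily the method described in the Google patent; the sample pages and the k=4 shingle size are made up for the example.

```python
def shingles(text, k=4):
    """Break text into the set of overlapping k-word 'shingles' (word n-grams)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def resemblance(doc_a, doc_b, k=4):
    """Jaccard similarity of the two shingle sets: 1.0 = identical, 0.0 = disjoint."""
    a, b = shingles(doc_a, k), shingles(doc_b, k)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Two pages that differ by a single word score very high and would be
# flagged as near-duplicates by a filter using a threshold such as 0.8.
page_1 = "the quick brown fox jumps over the lazy dog near the river bank"
page_2 = "the quick brown fox jumps over the lazy dog near the river bend"
print(round(resemblance(page_1, page_2), 3))
```

Copywriters who syndicate lightly edited versions of the same article across many pages are producing exactly the kind of high-resemblance pairs a filter like this is built to catch.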
Ani Kortikar, CEO of Netramind, took the "keep it simple" approach to explaining patents. Ani's analogy to children's stories was a fun-loving approach to what could have been a very heavy session. Ani provided an executive-level summary of a number of patents recently filed by search engines. Especially noteworthy were his recommendations to take much of what is in the patents with "a grain of salt." Ani did recommend monitoring your site's rate of link growth, staying in good neighborhoods, and providing worthwhile original content as good practices that would please both your users and the search engines.
The final speaker on the panel was search industry veteran Jon Glick, who is now Senior Director of Product Search and Comparison Shopping at Become.com. Formerly the Senior Manager of Web Search at Yahoo (and earlier with AltaVista), this search engine insider piqued audience interest as he dispelled myths associated with the recent search engine patents.
Jon opened his presentation entitled “The Patent Files: Revelations and Red Herrings,” describing patents as a “trade of disclosure for protection.” Jon then went through a series of key “disclosures” and provided insight on them from a search engine’s perspective.
The first “disclosure” Jon discussed was the notion that Click Through Rate (CTR) was a ranking factor. Jon stated that while CTR might be a great indicator of relevancy, it is easy to distort. The use of click bots and other techniques that artificially raise CTR are well known by the search engines. Jon warned that search engines were actually more likely to use high CTRs as a spam flag than as a ranking factor.
Another “disclosure” that Jon discussed was the concept that “the amount of time a user spent on a site” was a factor in ranking. The idea is that a search engine can track how long a user visits a site after clicking on a link from the search results. The length of time spent on the site is an indicator of the quality of the site. Users who immediately hit the back button may reveal a site that is clearly off topic (non-relevant) or a possible 404 page. Users who stay on the clicked site for a long time may indicate a good quality site.
Unfortunately, this technique as a ranking factor has a history of exploitation and search engines are wise to the various black hat techniques that artificially raise apparent “time on site.” Some of the more unsavory techniques include mouse trapping (disabling the browser’s back button) and endless pop-ups. Although it is a good marketing goal to increase the time a user spends on your site, search engines don’t give much weight to “time on site” due to the past abuse.
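The back-button signal Jon described can be pictured as a simple dwell-time classifier. The thresholds below are purely hypothetical; search engines don't publish such numbers, and as Jon noted, any real system would have to discount the signal heavily because of abuse.

```python
from datetime import timedelta

# Hypothetical cutoffs for illustration only -- not actual search engine values.
QUICK_BOUNCE = timedelta(seconds=5)
LONG_VISIT = timedelta(minutes=2)

def classify_visit(dwell_time):
    """Classify a click from the results page by time spent before returning."""
    if dwell_time <= QUICK_BOUNCE:
        return "likely off-topic or broken (e.g. a 404 page)"
    if dwell_time >= LONG_VISIT:
        return "possible quality signal"
    return "inconclusive"

print(classify_visit(timedelta(seconds=3)))
print(classify_visit(timedelta(minutes=5)))
```

An engine can only observe dwell time when the user returns to the results page, which is one more reason the signal is noisy even before black-hat inflation is considered.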
A recent hot topic in search engine forums is the “rate of change of link growth.” Glick confirmed that most search engines do monitor link growth and that a sudden spike in in-bound links could trigger a close review. There may be legitimate reasons for sudden link growth (think tsunami), but more often it signals artificial link manipulation.
Jon also provided a search engine’s perspective on whether changing the content on a site mattered to the search engine. Jon confirmed that engines do keep a history of the site’s content, but he added that the search engines most likely tracked this to determine crawl frequency, not ranking.
One note of particular interest: Jon mentioned that search engines often re-evaluate a site when it moves to a new IP address since this could indicate new ownership or a change in parked status. This revelation may make marketers think twice about casually changing their IP address if they are doing well in search engine rankings.
Jon summarized by saying that despite patent ownership that theoretically offers competitive protection to the holders, all search engines tend to use similar tactics in their ranking algorithms. For example, even though Google has patented “PageRank,” all major search engines use link analysis as part of their ranking algorithm. He went on to say that “there is a good bet that if you see something in use from a patent, it’s probably being used by other search engines or they are giving it serious consideration.” Jon’s final comment was, although these factors may influence ranking, “the core of all search engine ranking remains great content and great connectivity.”
Want to learn more about search-related patents? Check out these other search engine patent related stories: New Search Related Patents and Keeping Up with Search Engine Patents. Search Engine Watch members also have access to the Search Patents category, which currently features more than twenty additional stories on the topic.
Christine Churchill is president of KeyRelevance.com, a full-service search engine marketing firm offering organic search engine optimization, strategic link building, usability testing, and pay-per-click management.