Over the past few months I’ve been revisiting some of the papers I wrote during my time in search marketing. I have to say, I’ve been fascinated by the response from what I guess is an audience that was too young back in the early 2000s to want to read about information retrieval on the web, how it relates to SEO, and how websites with the most links rank more highly and are therefore discovered more often – thus becoming “Filthy Linking Rich”.
I have to be candid, too. I’m entering into a kind of SEO catharsis this year, taking whatever I think is left of that old wisdom and throwing it out there one last time. I always thought “search engine optimization” was a silly term anyway. After 17 years online I still haven’t met anyone who optimizes search engines.
I’m having an ongoing conversation at the moment with Search Engine Watch Director Jonathan Allen. It’s not a 100 percent “SEO is dead” conversation. It’s more of a “people who call themselves SEOs probably do less SEO in their daily routine than anything else” theme.
Essentially, SEO was born out of a need to make technical changes to web pages so that search engine crawlers had better access to content. But a generation has passed, and crawler technology is no longer as primitive as it was. Developers of content management systems, long the arch enemy of the crawler, now build them with “crawler friendliness” in mind.
What’s an SEO to do? Not a lot really.
For many years I’ve been describing a phenomenon that I refer to as the “from nowhere to somewhere” effect. It has been perfected by the SEO industry. You take a website that has every known technical barrier to crawling and indexing, open it out, and then, along with the client, watch it soar… Somewhere.
But you know, you can only do “nowhere to somewhere” once. And then what’s an SEO to do?
Anyway, I got rid of most of that angst here back in 2008, when I wrote a paper called “New Signals to Search Engines”, which focused on the evolution of information retrieval as opposed to SEO. Having tried so many experiments to see what worked and what didn’t (mainly on-page stuff like H tags and other unnecessary SEO paraphernalia), I realized that a number of macro-environmental issues were having a major effect.
The Internet and the World Wide Web are different things. The web was invented to do one thing and we’re trying to make it do another.
Google had realized some considerable time earlier that it would never achieve its mission of bringing together the world’s information in one place. After having had sight of 1 trillion URLs, Google blogged that it would likely never crawl them all in a timely fashion. And huge though 1 trillion is, it’s still just a fraction of the web.
There was a figure from a study saying that user-generated content was outpacing the creation of mediated content by a factor of 5:1. No technology based around the current HTTP protocol is ever going to scoop that up in a timely fashion, let alone in real time, which is what the end user wants.
Crawling the web, insofar as web search is concerned, has hit a glass ceiling. But there’s a shift that compensates, as users sidestep the browser in favor of the app. And within a few short years there will be a predicted 50 billion connected devices on the planet.
Anyway, I still believe there’s a lot of food for thought in this document. In fact, at the time I wrote it, I referred to it as a “thought paper” not a white paper or research paper. You can download “New Signals to Search Engines” here and see if it stirs any grey matter up at your end. It’s worth a conversation at least.