This new service takes Google results and shows them in a random order. Why? The site's creator, Tasila Hassine, is trying to make a point. She writes:
This tool touches upon several crucial issues on the web such as Search Engine Optimization. Shmoogle instantly neutralizes Page rank and the whole SEO industry induced by it. Yet it addresses other fundamental issues such as retrievability vs. visibility. While all pages on the net are equally retrievable, they are certainly not equally visible.
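The mechanism behind the point is simple: take a ranked result list and throw the ranking away, so every page is equally visible, not just equally retrievable. A minimal sketch in Python (the function name and sample results are mine, not Shmoogle's actual code):

```python
import random

def shmoogle(ranked_results, seed=None):
    """Return search results in random order, discarding rank.

    This is the 'neutralize PageRank' idea: if order is random,
    position on the page no longer rewards SEO.
    """
    shuffled = list(ranked_results)          # don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)    # seed only for reproducible demos
    return shuffled

# Hypothetical ranked results for a query:
results = ["bigbrand.example", "wikipedia.org", "local-bottler.example", "soda-history.example"]
print(shmoogle(results, seed=42))
```

The same four pages come back every time; only their visibility order changes.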
Hassine has a good point, and it's one I make quite often in my presentations and classes for both librarians and the general public.
Just because a page is "on the web" and has been crawled by a web engine doesn't mean it's easily retrievable/visible. As I've said before, the Invisible or Deep Web in 2005 is every result beyond number 6 or 7. (-:
Why is this an issue? Here are just a few reasons that come to mind:
++ Keywords Selected
You use the term "pop" but the perfect result uses the word "soda."
++ Number of Keywords Used by the Searcher
++ Effort and Time
The searcher takes whatever the first search returns and does nothing else to improve the results. They also want it "all" in just a few seconds.
++ Lack of Searching Skills
As I've said many times, people don't use most of the tools engines offer to create more precise results. I'm not just talking about advanced search features but also the fact that many of the large engines offer specialty tools like image, news, and discussion search. Most of the engines will tell you that click-through rates on these services are very small compared to the primary web engine. Udi Manber said a few months ago that search engines are not mind readers. He's right, and a little education about search could go a long way.
++ The Searcher Doesn't Look Past the First Results Page
Many more results are available, but so what?
++ Search Engine Overlap
Different engines return different results, but does the searcher look in more than one place?
++ Of course, another issue is that the data itself is just not on the open web. It might be available "via the web," but the searcher doesn't know where else to turn to find it. Again, a specialized database might have just what they're looking for. Yes, sometimes these databases cost money, but many times people have no idea they already have free access to these fee-based services from home or the office.
These reasons, and many others, are why I think we've seen so much interest in verticals over the past few months. Many times, assuming the searcher knows about the resource (here comes marketing again, since people can't use what they don't know about), a searcher can get an answer as good or better in less time by searching a smaller, focused database.
A study published by Outsell last month pointed out that searchers in the workplace are "shifting away from their Internet research methods from just four years ago" and relying more on other sources including librarians and their intranet.
I often wonder whether making large web engines even larger with more content will make everything easier, versus keeping things in small, focused databases and using meta/federated search technologies to search disparate databases simultaneously (if needed) through a single interface. Databases to help with database selection would also be possible.
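The federated approach above can be sketched in a few lines: fan the query out to each focused source in parallel, then merge everything behind one interface. The three source functions here are invented stand-ins for real databases, not any actual product's API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical source adapters: each wraps one focused database
# behind the same search(query) -> list-of-results shape.
def search_news(query):
    return [f"news:{query}-story-{i}" for i in range(3)]

def search_images(query):
    return [f"image:{query}-{i}" for i in range(2)]

def search_library_db(query):
    return [f"catalog:{query}-record-{i}" for i in range(2)]

SOURCES = [search_news, search_images, search_library_db]

def federated_search(query, sources=SOURCES):
    """Query every source simultaneously; present one merged list."""
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        result_lists = pool.map(lambda source: source(query), sources)
    merged = []
    for results in result_lists:
        merged.extend(results)
    return merged

print(federated_search("soda"))
```

A real implementation would also need the "database to help with database selection" step, i.e. routing the query only to sources likely to be relevant.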
Dynamic clustering, which Clusty.com provides, can help the searcher quickly surface results they might not see on the first page of results. In a white paper from Clusty's owner, Vivisimo, they argue that their technology can provide "a selective ignorance." Personalization based on a user's preferences and past search behavior can also help make material more visible.
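A toy version of that clustering idea (this is a crude word-overlap sketch, not Vivisimo's actual algorithm): group result titles under each significant term they contain, so a result buried on page three still surfaces under its topic folder.

```python
from collections import defaultdict

def cluster_by_keyword(titles, stopwords=("the", "a", "of", "and")):
    """Group result titles under each non-stopword term they contain.

    One result can land in several clusters, just as clustering
    interfaces show a document under multiple folders.
    """
    clusters = defaultdict(list)
    for title in titles:
        for word in title.lower().split():
            if word not in stopwords:
                clusters[word].append(title)
    return dict(clusters)

# Hypothetical result titles:
titles = ["History of Soda", "Soda Brands", "Pop Music History"]
clusters = cluster_by_keyword(titles)
print(sorted(clusters))  # the cluster labels a searcher would browse
```

The payoff for visibility: a searcher who clicks the "soda" folder sees both soda results together, regardless of where each sat in the original ranking.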