
What's it Going to Take to Beat Google?

These days, the most popular tech parlor game after guessing Google's IPO date is speculating what it will take to knock the company off its throne as web search champ.


Editor’s note: The author was an AltaVista product manager from 1999-2001.

Search has changed dramatically since the early AltaVista days under Digital Equipment. The pure search and technology-centric engines left standing have transformed themselves into direct-marketing businesses. Too bad AltaVista was so busy thinking of itself as a portal while Overture was inventing performance-based keyword search marketing and Google was laser-focused on relevancy, laying the groundwork for its highly relevant AdWords program.

So what’s it going to take to beat Google? Here’s a look at some of the critical factors search engines need to address to be successful in today’s environment.

Relevancy, performance, index, and ease of use

Relevancy, index size and freshness, performance, and ease of use are still critical factors, but they are no longer sufficient to predict future success.

Google’s claim to fame, with a share of about 55% of usage according to OneStat.com, came from delivering the most relevant results of all. However, the relevancy gap between Google and other search engines like Inktomi or AlltheWeb now seems to have faded. Most major engines are now very good at serving very relevant results given the few clues they can glean from the query, typically made up of just one or two keywords. Are search engines’ results as relevant as they will ever get?

Performance, or how quickly browsers display the results, has become a non-issue except for sporadic downtimes. The size of the web is growing continuously, and so are the indices of the major crawler-based engines. Spiders are racing through the web, refreshing their content more often than ever.

Portals have poured money into usability studies to find out what users want their results to look like. There is no one-size-fits-all look and feel, but search engines seem to have finally figured out that irrelevant clutter is bad and simplicity is good.

To remain on the edge, search engines have to continuously push the frontiers of innovation.

Users’ intent and personalized relevancy

The subjective nature of users’ intent when formulating queries is complex. Understanding the context is challenging, but two important factors, location and time, are under-exploited by the engines today. A user typing the query [pizza] from Chicago or Milan, at 11:00am or 9:00pm local time, doesn’t expect the same results. Besides, the local advertising market is considerable, and better geo-based services are critical to some wireless devices.

Training the engines

Search engines should learn from our behaviors. Users often search on the same topic, typing recurring queries. Users don’t always click on the first result and often navigate back and forth, selecting links from a results page. Search engines should be more proactive, learning for the benefit of individual users and becoming smarter over time.

Think of it as training the engines. The more you use them, the better advice you receive. There are issues to address, including privacy concerns and how to handle multiple users of a single PC, to name only a few. But these issues can be overcome.
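As a rough illustration of the idea (a minimal sketch, not any engine’s actual method; all class and method names here are hypothetical), an engine could keep per-user click counts and nudge previously clicked results up the ranking:

```python
from collections import defaultdict

class PersonalizedRanker:
    """Illustrative only: boost results a given user has clicked before."""

    def __init__(self):
        # per-user click history: user -> url -> click count
        self.clicks = defaultdict(lambda: defaultdict(int))

    def record_click(self, user, url):
        self.clicks[user][url] += 1

    def rerank(self, user, results):
        # results: list of (url, base_relevance_score) pairs.
        # Add a small bonus per past click, then sort best-first.
        def score(item):
            url, base = item
            return base + 0.1 * self.clicks[user][url]
        return [url for url, _ in sorted(results, key=score, reverse=True)]
```

The weighting (0.1 per click) is arbitrary; the point is simply that recurring behavior can tilt otherwise identical result lists toward what an individual user has found useful before.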

Query disambiguation

Emerging concept-based clustering technologies, used in search engines such as Vivisimo, are doing wonders at allowing users to refine ambiguous queries. For example, searchers can discriminate between computer virus and medical virus from a more generic [virus] query, but they still have to do the work after the results are served.

Disambiguation technologies have not been fully leveraged yet. Andrei Broder, Distinguished Engineer and CTO of the Institute for Search and Text Analysis, part of the IBM Research Division, explains that queries can be divided into three categories: informational, navigational, and transactional.

Some queries are clearly transactional, such as [iPaq H1910]; others are clearly informational, such as [muscular atrophy]; and others are clearly navigational, such as [NBA official site]. Smart linguistic analysis could dissect and categorize these search terms and make better sense of users’ intent.
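A crude first pass at such categorization could be keyword heuristics (purely illustrative cue lists, not Broder’s actual method or any engine’s implementation):

```python
# Hypothetical cue lists for a naive query-intent classifier.
NAVIGATIONAL_CUES = ("official site", "homepage", "www.", ".com")
TRANSACTIONAL_CUES = ("buy", "price", "download", "order")

def classify_query(query):
    """Bucket a query as navigational, transactional, or informational."""
    q = query.lower()
    if any(cue in q for cue in NAVIGATIONAL_CUES):
        return "navigational"
    if any(cue in q for cue in TRANSACTIONAL_CUES):
        return "transactional"
    # Default bucket when no cue matches.
    return "informational"
```

Real disambiguation would of course need far richer signals than surface keywords, which is precisely why the author argues the technology has not been fully leveraged yet.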

Making results productive

Search engines should follow through better and make results more productive for users, notifying them of new relevant results for specific queries. Users often go to the same sites looking for new content. Search engines could monitor these changes for us. What a great opportunity for direct-marketing businesses to establish that one-to-one marketing relationship, directly addressing users’ needs, and serve a relevant Google AdWords or Overture paid link in the notification email.
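The monitoring idea boils down to a saved search that is re-run periodically, reporting only what is new. A minimal sketch (the `run_query` callable stands in for any engine’s search call; it is an assumption, not a real API):

```python
def new_results(run_query, query, seen):
    """Re-run `query` and return URLs not already in `seen`, updating `seen`.

    run_query: callable taking a query string and returning a list of URLs.
    seen: a set of URLs already reported for this saved search.
    """
    fresh = [url for url in run_query(query) if url not in seen]
    seen.update(fresh)
    return fresh
```

A notification service would call this on a schedule and email the user only when `fresh` is non-empty, slotting a paid link into that email.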

Vertical engines

The continued emergence of vertical search engines will increasingly fragment the market, eroding relative usage share away from Google. Crawling deeper through the invisible web, the indices of these topical engines have more depth for the subjects they cover. Findlaw.com, for example, allows users to retrieve proprietary content such as cases, opinions, and other legal reference material unavailable elsewhere.

Serving niche markets and very targeted user bases, these sites are in a position to offer marketers better click conversion rates and command higher costs per click than the general-purpose engines.

Meta search engines

Meta search engines, such as Infospace’s Dogpile.com, Vivisimo.com, and Mamma.com, have a very good shot at beating Google. Why wouldn’t the most relevant results from several of the best engines be more relevant than the results of a single – even the best – crawler-based engine?

Meta search engines are not as technology-centric as search engines. But they have spent too much time trying to replicate single-source crawler-based engines and have earned a bad reputation for serving too many irrelevant paid links and obtrusive popup advertisements. Meta search engines should build on the differentiated, added value of aggregating results from the best sources.

Conclusion

The recent consolidation trends mean fewer players. Inevitably, these are also the fittest, with more negotiation power. Yahoo acquiring Overture would tip the balance of power, creating a formidable competitor to Google. MSN Search has certainly not played its hand yet. Redmond cannot feel too good about fueling Yahoo’s traffic and paid placement revenues.

More players and competitors will surface as more creative and sustainable business models emerge. Search players will increasingly focus on respective and distinct core competencies. Indexing the web is a complex task, as is researching smarter relevancy algorithms. Richer concept-based marketing tools will require more sophisticated skills.

A new model could very well emerge, where crawlers crawl and marketing firms target campaigns. Meta search engines could very well differentiate themselves by providing real aggregation value, executing on relevancy and user experience, and emerge as top search destinations.

Arnaud Fischer was an AltaVista information retrieval and search engine technology product marketing manager from 1999-2001.

Search Headlines

NOTE: Article links often change. In case of a bad link, use the publication’s search facility, which most have, and search for the headline.

FTC: Blame Foreigners for Spam
Wired News Jun 12 2003 9:40AM GMT
Espotting unveils new search partners
Netimperative Jun 12 2003 9:39AM GMT
Yahoo Shopping leads shopping/rewards sites, Hitwise reports
InternetRetailer.com Jun 12 2003 8:40AM GMT
Interrupting a Web Search to Take a Quick Call
New York Times Jun 12 2003 6:30AM GMT
Whither Netscape?
PC Magazine Jun 12 2003 5:35AM GMT
Search Engine Marketing (SEM) vs. Search Engine Optimization (SEO)
High Rankings Jun 12 2003 4:01AM GMT
Has Google Ruined the Web?
PC Magazine Jun 12 2003 4:00AM GMT
UK businesses fly .co.uk flag
Demys Jun 11 2003 3:00PM GMT
Yahoo retools IM for serious business
ZDNet Jun 11 2003 12:46PM GMT
Re-Inclusion of a Banned Site in Google
Search Engine Guide Jun 11 2003 10:48AM GMT
Yahoo getting clever about spam
Silicon.com Jun 11 2003 9:49AM GMT
Yahoo Personals aims at European union
ZDNet Jun 11 2003 8:26AM GMT
powered by Moreover.com
