SEO News

Search Research From The WWW10 Conference

A number of technical papers relating to web indexing and search were presented at the recent 10th International World Wide Web Conference, held from May 1 to 5 in Hong Kong.

Below, I've highlighted some presentations that seemed particularly interesting. Be warned -- many of these are highly technical documents.

WWW10 Proceedings: Table of Contents
http://www10.org/cdrom/papers/contents.html

A full list of papers presented at the conference.

Rank Aggregation Methods for the Web
http://www10.org/cdrom/papers/577/

Discusses techniques for reranking results when combining rankings from different sources, which among other things may help improve the quality of meta search results and help eliminate spam from search listings.
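
For a flavor of what rank aggregation involves, here's a minimal sketch of Borda counting, one classic aggregation method from this line of work. The engine result lists are hypothetical.

```python
# A minimal sketch of Borda-count rank aggregation: each engine's ranked
# list awards points to a page based on its position, and pages are
# re-ranked by total points. The example lists are hypothetical.

def borda_aggregate(ranked_lists):
    """Combine several ranked lists of URLs into one consensus ranking."""
    scores = {}
    for ranking in ranked_lists:
        n = len(ranking)
        for position, url in enumerate(ranking):
            # A page ranked first in a list of n earns n points,
            # second earns n - 1, and so on.
            scores[url] = scores.get(url, 0) + (n - position)
    # Sort by total score, highest first.
    return sorted(scores, key=scores.get, reverse=True)

engine_a = ["page1", "page2", "page3"]
engine_b = ["page2", "page1", "page4"]
engine_c = ["page2", "page3", "page1"]

print(borda_aggregate([engine_a, engine_b, engine_c]))
# ['page2', 'page1', 'page3', 'page4']
```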

A Case Study in Web Search using TREC Algorithms
http://www10.org/cdrom/papers/317/

TREC is a long-standing study designed to evaluate the effectiveness of information retrieval algorithms. However, do these tests evaluate web-wide search engines fairly? The authors of this paper try a new test to measure how well an ideal TREC algorithm fares in a situation common in the real world of web search -- navigational requests. Results? Web search engines do better than the TREC algorithm. Well worth reading, especially to understand the great difficulties in assessing relevance.

When Experts Agree: Using Non-Affiliated Experts to Rank Popular Topics
http://www10.org/cdrom/papers/474/

Examines the "Hilltop" system, which can identify pages that can be classified as "experts" from within a large index. These expert pages are then associated with keyphrases, depending on their context. The result is an algorithm that produces high-quality search results for popular or broad topics from a much smaller set of web pages than used by the major crawler-based search engines.

Breadth-first search crawling yields high-quality pages
http://www10.org/cdrom/papers/208/

Finds that crawling links in the order that they are discovered -- "breadth-first" -- provides a fairly good collection of high-quality web pages when compared to a system that crawls in order of perceived page quality. Why care? Calculating page quality uses a lot of computational resources that breadth-first crawling doesn't require, which means breadth-first crawling may save money without greatly decreasing the quality of an index.
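
For reference, breadth-first crawling amounts to little more than a FIFO queue. Here's a minimal sketch; fetch_links is a hypothetical stand-in for downloading a page and extracting its outbound links.

```python
# A minimal sketch of a breadth-first crawler: URLs are fetched in the
# order they are discovered, with no page-quality scoring at all.
from collections import deque

def bfs_crawl(seed_urls, fetch_links, max_pages=1000):
    queue = deque(seed_urls)   # FIFO queue: first discovered, first crawled
    seen = set(seed_urls)
    crawled = []
    while queue and len(crawled) < max_pages:
        url = queue.popleft()
        crawled.append(url)
        for link in fetch_links(url):
            if link not in seen:   # each URL is queued at most once
                seen.add(link)
                queue.append(link)
    return crawled
```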

Intelligent Crawling On the World Wide Web with Arbitrary Predicates
http://www10.org/cdrom/papers/110/

Covers the idea of building a crawling system that learns how to find the best pages on a particular subject by examining the context of URLs (when available) and learning which relationships mark the type of pages it should collect.

Building a Distributed Full-Text Index for the Web
http://www10.org/cdrom/papers/494/

Explores new techniques and methods for building large indexes of the web.

An Adaptive Model for Optimizing Performance of an Incremental Web Crawler
http://www10.org/cdrom/papers/210/

Examines how IBM's WebFountain crawler attempts to keep its index constantly updated with changes from the web.

Finding Authorities and Hubs From Link Structures on the World Wide Web
http://www10.org/cdrom/papers/314/

Compares various link analysis ranking algorithms to discover how well they retrieve relevant documents, and looks for ways such algorithms can be measured against each other.
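
The best known of these algorithms is Kleinberg's HITS, which repeatedly reinforces hub and authority scores across a link graph. Here's a minimal sketch of that core iteration, run on a hypothetical toy graph.

```python
# A sketch of the core HITS iteration: good hubs point to good
# authorities, and good authorities are pointed to by good hubs.

def hits(graph, iterations=20):
    """graph maps each page to the list of pages it links to."""
    pages = set(graph) | {p for targets in graph.values() for p in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # A page's authority score sums the hub scores of pages linking to it.
        for p in pages:
            auth[p] = sum(hub[q] for q in graph if p in graph[q])
        # A page's hub score sums the authority scores of pages it links to.
        for p in pages:
            hub[p] = sum(auth[q] for q in graph.get(p, []))
        # Normalize so scores don't grow without bound.
        for scores in (auth, hub):
            norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
            for p in scores:
                scores[p] /= norm
    return hub, auth

toy_graph = {"a": ["c", "d"], "b": ["c", "d"], "c": ["d"]}
hub, auth = hits(toy_graph)
print(max(auth, key=auth.get))  # 'd' emerges as the strongest authority
```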

PicASHOW: Pictorial Authority Search by Hyperlinks on the Web
http://www10.org/cdrom/papers/289/

Describes a system that uses link analysis to return relevant images in response to searches.

Placing Search in Context: The Concept Revisited
http://www10.org/cdrom/papers/431/

Examines a system to improve search requests by looking at the context of a web page that a searcher may be viewing.

On Integrating Catalogs
http://www10.org/cdrom/papers/076/

Explores a technique for automatically classifying documents.

You Are What You Link
http://www10.org/program/society/yawyl/YouAreWhatYouLink.htm

Link analysis can provide a means of easily determining what someone is interested in and who they associate with, when run against a known set of home pages.

Scaling Question Answering to the Web
http://www10.org/cdrom/papers/120/

MULDER isn't a character from The X-Files but rather a system that processes questions so that web search engines are more likely to respond with answers.

Clustering User Queries of a Search Engine
http://www10.org/cdrom/papers/368/

Examines how measuring clickthrough can reveal whether two or more different queries are related to the same answer. This can help identify both documents that should be manually promoted by search engine editors and the queries for which they should appear.
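
One simple way to act on this idea is to treat two queries as related when searchers click overlapping sets of documents for them. Here's a minimal sketch; the click log is hypothetical.

```python
# A simple sketch of the clickthrough idea: two queries are treated as
# related when searchers click overlapping sets of documents for them.

click_log = {
    "cheap flights": {"doc1", "doc2", "doc3"},
    "discount airfare": {"doc2", "doc3", "doc4"},
    "dog training": {"doc7", "doc8"},
}

def click_similarity(q1, q2, log):
    """Jaccard overlap between the documents clicked for two queries."""
    a, b = log[q1], log[q2]
    return len(a & b) / len(a | b)

print(click_similarity("cheap flights", "discount airfare", click_log))  # 0.5
print(click_similarity("cheap flights", "dog training", click_log))      # 0.0
```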

Learning Search Engine Specific Query Transformations for Question Answering
http://www10.org/cdrom/papers/348/

Examines a method to automatically detect question-oriented queries and translate them in ways that improve the results returned by web search engines.
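
As a toy illustration of the kind of transformation involved -- far simpler than the learned transformations the paper describes -- a question can be rewritten into declarative phrases that a keyword engine is more likely to match in answer text. The rewrite rules here are hypothetical.

```python
# A toy illustration of question-to-query transformation: rewrite a
# natural-language question into declarative phrases likely to appear
# near an answer. The rules below are hypothetical hand-written ones.
import re

REWRITE_RULES = [
    # "what is X?" -> phrases likely to precede a definition
    (r"^what is (?:a |an |the )?(.+?)\??$", ['"{0} is"', '"{0} refers to"']),
    # "who invented X?" -> "X was invented by"
    (r"^who invented (?:the )?(.+?)\??$", ['"{0} was invented by"']),
]

def transform(question):
    q = question.strip().lower()
    for pattern, templates in REWRITE_RULES:
        m = re.match(pattern, q)
        if m:
            return [t.format(m.group(1)) for t in templates]
    return [q]  # fall back to the raw question

print(transform("Who invented the telephone?"))
# ['"telephone was invented by"']
```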

Integrating the Document Object Model with Hyperlinks for Enhanced Topic Distillation and Information Extraction
http://www10.org/cdrom/papers/489/

Examines a proposed technique for refining link analysis so that results are less prone to being influenced by link farms, bogus links and other attempts to inflate link relevance.

Geospatial Mapping and Navigation of the Web
http://www10.org/cdrom/papers/278/

Examines ways that web pages can be geocoded and introduces a tool for browsing the web geographically.

Retrieving and Organizing Web Pages by "Information Unit"
http://www10.org/cdrom/papers/466/

If a web page doesn't actually contain the words you are searching for, a crawler-based search engine might not list it in response to your query. However, this paper suggests a solution might be to have search engines that list "information units." These would be combinations of pages. The advantage is that when several similar pages are combined, they may be better defined and thus easier to find.

Towards a Highly-Scalable and Effective Metasearch Engine
http://www10.org/cdrom/papers/494/

Examines a method for better understanding the data sources a meta search engine can tap into and when it should actually access them.

