Auction Search Case Awaits Ruling

From The Search Engine Report
May 3, 2000

Is it legal to spider someone's web site without permission? We've never had a court ruling on this before, at least in terms of textual information, but that's expected to change shortly.

There's been a long-running dispute between auction site eBay and auction search engine Bidder's Edge. eBay claims the right to restrict who can crawl its listings, in part arguing that spiders can slow its service and that the information on its site is intellectual property entitled to protection. Bidder's Edge argues that the information belongs to those auctioning their goods and services, not to eBay, and that it needs no permission from eBay to index this information.

The case is currently before the US District Court in San Jose, California. The judge in the case has said that he's inclined to grant a preliminary injunction against Bidder's Edge, but how exactly he might restrict the service remains to be seen, as does whether he will actually grant an injunction at all.

Should he do so, it would be a blow to Bidder's Edge but by no means an end to the issue. The case would still head to trial, where Bidder's Edge could win. It might also prevail during the legal maneuvering that occurs before a trial.

Bidder's Edge, and those on its side, express concern that a victory for eBay could mean an end to search engines. After all, no major crawler-based search engine expressly seeks permission to index content. Instead, permission is assumed.

This brings up the issue of the robots.txt file, a long-standing convention for explicitly telling spiders to stay out of a web site. There's a strong argument that the greater good on the web is served by allowing search engines to operate in "opt-out" mode. That means that if you don't want your content indexed, you use a robots.txt file to opt out of the process by telling spiders to go away.
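
To illustrate the opt-out mechanism (this is an example only, not a file from any particular site), a robots.txt file that tells all spiders to stay away from an entire site needs just two lines, placed at the site's top level:

    # Example robots.txt: asks all well-behaved spiders
    # to stay out of the entire site
    User-agent: *
    Disallow: /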

If eBay wishes to keep spiders out, it should certainly have a robots.txt file in place. It doesn't, and it has never had one at any time I've checked since this dispute began last October. The lack of such a file, a fundamental mechanism the company should be using, suggests little real concern about being crawled. Moreover, eBay is not only listed on the major search engines but, to my knowledge, has also employed several search engine optimization firms to promote itself on those search engines. This demonstrates no real concern that visitors should only enter the site via its home page, nor that its internal content should be protected from spiders. Instead, eBay simply seems to want to play favorites. Spiders that benefit it by sending the site traffic are apparently OK, while spiders it feels may threaten its business interests should be forbidden.

So this isn't just a case about spidering. It also involves whether some spiders can be selectively discriminated against. The robots.txt convention certainly allows this: you can exclude just particular spiders, a feature designed especially to stop "misbehaving" spiders that site owners felt were putting a burden on their servers. But the robots.txt file isn't a legal convention, which is why court cases like this one will be watched so closely.
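
To illustrate the selective exclusion just described, a robots.txt file along these lines would turn away a single spider while leaving the rest free to crawl ("SomeBot" is simply a stand-in for the unwanted spider's user-agent name):

    # Example robots.txt: excludes one named spider, allows the rest
    User-agent: SomeBot
    Disallow: /

    User-agent: *
    Disallow: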

For the record, even if eBay had put up a robots.txt file, Bidder's Edge says it would not have observed it. I find that disturbing, because I feel those who operate spiders should obey the robots.txt convention. It's one of the few solid rules we have involving search engine spiders, and it has helped make the entire opt-out indexing system possible. In turn, that has benefited web users as a whole.
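
To give a sense of what observing the convention involves, here is a rough sketch, in Python, of the check a well-behaved spider might make before fetching a page. The crawler name and URLs are invented for illustration; it simply uses the standard library's robots.txt parser:

    # Sketch of a polite spider's robots.txt check (illustrative only)
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("http://www.example.com/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt, if it has one

    # Only crawl the page if the site hasn't told this spider to stay out
    if rp.can_fetch("ExampleCrawler", "http://www.example.com/listings/123.html"):
        print("Allowed by robots.txt -- go ahead and index the page")
    else:
        print("Disallowed by robots.txt -- skip this page")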

However, Bidder's Edge argues that it would ignore such a file at eBay because it feels the content belongs to those placing the auctions, not to eBay itself. Thus, eBay should have no right to limit the ability of its participants to be found. That's a powerful argument. It's akin to GeoCities members being unable to have their home pages found because GeoCities owner Yahoo decided to block search engine spiders. I can imagine the outcry that would bring.

This brings us to the meta robots tag. It allows spiders to be blocked on a page-by-page basis, rather than site-wide as the robots.txt file is designed to do. Potentially, eBay could give each of its users the choice, at the time they place an auction, of whether they want spiders blocked from indexing their content. It's even possible that eBay might charge those who choose to allow spiders in an extra fee, which could be used to cover any real burden the spiders might place on its servers.
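
For reference, the tag itself is a single line added to the head of any page that should be kept out of search engine indexes. A page for an auction whose owner opted out might carry something like this (an illustration, not anything eBay actually uses):

    <!-- Example only: tells spiders not to index this page or follow its links -->
    <meta name="robots" content="noindex, nofollow">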

eBay
http://www.ebay.com/

Bidder's Edge
http://www.biddersedge.com/

Search Engines And Legal Issues
http://searchenginewatch.com/resources/legal.html

You'll find links to other cases involving spidering and linking here.

Search Engine Features For Webmasters
http://searchenginewatch.com/webmasters/features.html

Information on blocking spiders with a robots.txt file or a robots meta tag can be found here.

eBay, Bidder's Edge face off in court
News.com, April 14, 2000
http://news.cnet.com/news/0-1007-200-1697820.html

Short summary of the current situation, with a link to a story about the Justice Department asking eBay about its actions.

Auction Dispute Centers on Question of Control Over Data
New York Times, April 14, 2000
http://www.nytimes.com/library/tech/00/04/cyber/cyberlaw/14law.html

Longer summary of the case, with quotes from both sides and third parties.

eBay vs. Auction Aggregators: A Freedom Fight?
InternetNews.com, Feb. 11, 2000
http://www.internetnews.com/ec-news/article/0,1087,4_302591,00.html

An older article exploring the issues involved.

Deep blocking
Economist, Oct. 16, 1999
http://www.economist.com/editorial/freeforall/16-10-99/index_wb6340.html

Another older article but still useful for explaining the concerns in the case.

Legality of 'Deep Linking' Remains Deeply Complicated
New York Times, April 14, 2000
http://www.nytimes.com/library/tech/00/04/cyber/cyberlaw/07law.html

Article about a different case between Ticketmaster and Tickets.com which touches on similar issues to the eBay dispute.

Ticketmaster Corp., et al. v. Tickets.Com, Inc.
GigaLaw, March 27, 2000
http://www.gigalaw.com/library/ticketmaster-tickets-2000-03-27.html

Detailed information from the Ticketmaster case.

Copyright Decision Threatens Freedom to Link
New York Times, Dec. 10, 1999
http://www.nytimes.com/library/tech/99/12/cyber/cyberlaw/10law.html

Describes a case where a court ruled against a site's ability to link to pirated content.

Court Ruling Denies Copyright Protection For Images On The Net
7am, Dec. 21, 1999
http://7am.com/cgi-bin/twires.cgi?1000_t99122101.htm

Details on a case involving the spidering of images, where the court upheld the right to index.

Kelly vs. Arriba Soft Corporation
http://netcopyrightlaw.com/mediacoverage.asp

More information about the image indexing case above, with a current update on its status, from the plaintiff, photographer Les Kelly.