Judge: Google Must Give Up Some Data To Department Of Justice

Judge to Order Google to Give Up Some Data from the Associated Press covers the news that Google will be required to hand over some information requested by the US Department of Justice. The DOJ has now asked for a much smaller set of data than it originally sought: 50,000 URLs selected randomly from the Google index and 5,000 random search requests.

Geez, if that's all you need, guess it's confirmed you went overkill on the first request, eh? And what a nice spend of taxpayers' money to contest this. I can think of better ways to get 50,000 random URLs out of the Google index and 5,000 search requests from other sources.
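For what it's worth, drawing a uniform random sample of that size from any list of URLs or log of queries is a textbook exercise. A minimal sketch using reservoir sampling — the data and names here are purely illustrative, not anything from the case:

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Uniformly sample k items from an iterable of unknown length.

    Classic "Algorithm R": keep the first k items, then replace a
    random slot with decreasing probability as the stream grows.
    """
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randint(0, i)  # inclusive upper bound
            if j < k:
                sample[j] = item
    return sample

# Hypothetical stand-in for a crawl list or query log.
urls = [f"http://example.com/page{i}" for i in range(100000)]
picked = reservoir_sample(urls, 50, seed=42)
```

Because the reservoir never holds more than `k` items, this works even when the source is far too large to fit in memory — which is rather the point when the source is a search engine's index.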

The judge in the case has said he intends to grant the DOJ some data, though exactly what remains to be determined. A final ruling will come "very soon" or "very quickly," the judge said, according to other accounts below:

  • Judge indicates Google must turn over some data, San Jose Mercury News, covering Google's concession that the new request is less burdensome, plus more on the judge's desire to ensure no private data is released.
  • Judge to help feds against Google, News.com, covering Google's warning that it "could face hundreds of university professors [saying] 'I've got a study I'd like you to conduct'."

For background on the case, see our past articles.

Postscript: Judge will require Google to turn over some documents from USA Today quotes the US DOJ attorney saying of the much pared-down request:

"We could perform the study. The study would be substantially improved if we had the Google data."

In some of my articles on this mess, I've noted that one of the frightening things about the US government's original request was how ignorant it seemed to be of the way search engines operate. It had all the feel of "give us data," not "give us what we need." Now we're told that 5,000 random queries will "substantially" improve a study for which other search engines were forced to hand over what seems to be millions of queries? Drop in the bucket, anyone? And more important, it again illustrates how far-reaching -- for no good reason -- the original request was.

About the author

Danny Sullivan was the founder and editor of Search Engine Watch from June 1997 until November 2006.