An InternetNews.com article named Peeking Into Google has Google's VP of operations and engineering giving us insight into Google's architecture. The article covers Google's hardware, the operating system running on that hardware, and the auto-healing technology used on Google's servers. It also describes how Google stores data across its machines and what happens when a query is submitted. Reading further, you learn where the snippet of content on the search results page comes from, and that the result page is then stored in memory. The Google VP also discusses the "Google File System," the "Map/Reduce Framework," and the "Global Work Queue," along with their respective responsibilities.
Here are some key points from the article:
- Commodity servers for about $1,000 each built into interconnected nodes for complete redundancy.
- The operating system is a "stripped-down Linux kernel" with patches for bugs "that haven't been fixed in the original kernel."
- "Google has automated methods of dealing with machine failures, allowing it to build a fast, highly reliable service with cheap hardware."
- Google splits up Web pages into "shards" and then replicates them to several other servers; these servers are called "chunk servers."
- When a searcher submits a query, it is "split into chunks of service," and Google uses "one complete set of servers" to answer it.
- The snippets of content used under the search results come from "document servers" which contain one copy of the Web page.
- The result page is then stored in memory.
- The Google File System is partly responsible for storing "two copies that are not physically adjacent -- not on same power strip or same switch," of "chunks".
- Client machines are used for "fault tolerance"; if one fails, the "chunks" should move to a different client machine.
- This is all managed with Google's "Map/Reduce Framework" which was designed in 2004.
- Google's Global Work Queue batches queries on machines to run "random computations over tons of data."
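To make the shard idea above concrete, here is a minimal sketch of the scatter/gather pattern the article alludes to: a query is fanned out to every index shard and the partial results are merged by score. All the names and data here (`search_shards`, the in-memory shard dictionaries) are hypothetical stand-ins, not Google's actual API.

```python
def search_shards(query, shards, top_k=3):
    """Fan a query out to every index shard and merge the partial results.

    Each shard is modeled as a dict mapping a term to a list of
    (doc_id, score) pairs -- a toy stand-in for a real index server.
    """
    partials = []
    for shard in shards:
        # gather: collect each shard's hits for this query term
        partials.extend(shard.get(query, []))
    # merge: highest-scoring documents first, keep the top_k
    return sorted(partials, key=lambda hit: hit[1], reverse=True)[:top_k]
```

In a real deployment the fan-out would happen over the network in parallel, but the merge step is the same idea: each shard only knows its slice of the Web, so the front end has to combine the slices.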
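The Google File System point above, about keeping "two copies that are not physically adjacent -- not on same power strip or same switch," is essentially a constraint on replica placement. A minimal sketch of that constraint, with made-up server records and a hypothetical `place_replicas` helper (not GFS's real placement algorithm):

```python
import itertools

def place_replicas(servers, copies=2):
    """Pick `copies` chunk servers such that no two replicas share a
    rack or a power strip (the failure domains named in the article).

    `servers` is a list of dicts like
    {"id": "cs1", "rack": "r1", "strip": "p1"}.
    """
    for combo in itertools.combinations(servers, copies):
        racks = {s["rack"] for s in combo}
        strips = {s["strip"] for s in combo}
        # accept only placements where every replica is in a distinct
        # rack AND on a distinct power strip
        if len(racks) == copies and len(strips) == copies:
            return [s["id"] for s in combo]
    raise RuntimeError("no placement satisfies the failure-domain constraints")
```

The point of the constraint is that a single power strip or switch failure can then take out at most one copy of any chunk, which is what lets cheap commodity hardware add up to a reliable service.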
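The "Map/Reduce Framework" mentioned above follows the model described in Google's 2004 MapReduce paper: a mapper emits key/value pairs, the framework groups values by key, and a reducer folds each group into a result. A toy single-process sketch of that flow (the function names here are my own, and a real implementation distributes this across thousands of machines):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Toy single-process Map/Reduce: run the mapper over every record,
    group the intermediate pairs by key, then reduce each group."""
    intermediate = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            intermediate[key].append(value)
    return {key: reducer(key, values) for key, values in intermediate.items()}

# The classic word-count example from the MapReduce paper
def word_mapper(line):
    for word in line.split():
        yield word.lower(), 1

def count_reducer(word, counts):
    return sum(counts)
```

For example, `map_reduce(["the quick fox", "the fox"], word_mapper, count_reducer)` counts each word across both lines. The "random computations over tons of data" that the Global Work Queue batches up are jobs of exactly this shape.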
There is a forum thread on this article at Cre8asite Forums.