I've written before about how site speed and performance are major factors for search engines such as Google, especially for sites with an enormous amount of content.
It's difficult for a search engine to judge the overall user experience when someone clicks on a result, especially given the countless query combinations and new phrases that are used every day.
A search engine like Google can tell how much time a particular page takes to generate for a crawler, and that time is taken as a factor in a site's ability to produce a response in a reasonable amount of time.
Without knowing the exact formula, my view is just that, a view. It's safe to assume that Google uses a per-domain average of sorts, calculated from the response times recorded during a crawl, to determine how many pages from a particular domain can be hit at the same time.
From what I can tell, sites that respond in 500 milliseconds or less on average tend to rank well within the Google search results. This, of course, is highly dependent on the other ranking factors at play. Additionally, I've seen that sites averaging over one second tend to rank considerably less well at similar testing levels. This "demotion" tends to increase as the latency gets worse.
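To see where your own site sits relative to that 500-millisecond mark, here's a short Python sketch that averages response times from access-log lines. The log format is an assumption (a combined-style log with the request duration appended in microseconds, as Apache's %D directive produces); adjust the regular expression to match your own server's logs.

```python
import re

# Assumed format: combined log line ending in the request duration
# in microseconds (e.g. Apache's %D). Adjust to your own log format.
LOG_LINE = re.compile(r'"GET (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ (?P<micros>\d+)$')

def average_response_ms(lines):
    """Return the mean response time in milliseconds across matching lines."""
    times = []
    for line in lines:
        m = LOG_LINE.search(line)
        if m:
            times.append(int(m.group("micros")) / 1000.0)  # microseconds -> ms
    return sum(times) / len(times) if times else None

sample = [
    '1.2.3.4 - - [10/Oct/2008:13:55:36 -0700] "GET /index.html HTTP/1.1" 200 2326 420000',
    '1.2.3.4 - - [10/Oct/2008:13:55:37 -0700] "GET /about.html HTTP/1.1" 200 1150 680000',
]
print(average_response_ms(sample))  # average of 420 ms and 680 ms -> 550.0
```

An average like the 550 ms above would put a site right on the edge of the range I've seen rank well.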
The same goes for a site, either large or small, disappearing for a period of time. In most cases, a search engine will see a server-based error message such as "500 server busy," the kind of message delivered by most throttling programs designed to keep a search engine from over-crawling a site. A search engine such as Google will automatically assume that the Web server is overloaded and that users are seeing the same message.
Why would a search engine want its users to see this type of message? Thus, in most cases these results are removed from the index until such problems are corrected.
During times of either configuration problems or simply an outage, a search engine may see any of the following: DNS is unavailable, the network is unreachable, 500 errors are returned in large volume, or, if a server is set up incorrectly, redirects lead nowhere, every page returns a 404 (page or file not found) error, or requests simply hang indefinitely without a timeout. Any of these problems can cause a site to fall out of the index, either very quickly or over time, depending on the situation and severity.
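The failure modes above can be sketched as a simple classifier. This is purely illustrative; the function name and category labels are my own, not from any real crawler, but it shows how each outage symptom maps to what a crawler actually observes on a fetch attempt.

```python
# Illustrative sketch only: classify what a crawler observed on a single
# fetch attempt into the failure modes listed above.
def classify_crawl_result(status=None, error=None):
    """status: HTTP status code, or None if the request never completed.
    error: one of "dns", "network", "timeout", or None."""
    if error == "dns":
        return "DNS unavailable"
    if error == "network":
        return "network unreachable"
    if error == "timeout":
        return "hung without a timeout"
    if status is not None:
        if 500 <= status <= 599:
            return "server error"
        if status == 404:
            return "page not found"
        if 300 <= status <= 399:
            return "redirect (verify the target exists)"
    return "ok"

# Any result other than "ok", seen in volume, is a signal the site
# may start dropping out of the index.
print(classify_crawl_result(status=503))  # -> server error
```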
It's important for a large site to consider having multiple infrastructure backups. If you're dealing with more than 300,000 visitors a day, it may be necessary to have multiple data centers across your market. One of the best ways to ensure uptime is to use a service that caches your pages and keeps them hosted across the world, ensuring that a user or search engine can get to that data instantly. Panther Express is a very reliable service that has worked well for me over the past year.
It may also be beneficial to use a DNS service that broadcasts your information on a regular basis, so that if one of your servers goes down you will be covered (in most cases).
Make sure your operations team keeps your servers updated and online, and make sure they don't use a module such as mod_throttle to keep spiders from overwhelming the servers. If crawlers are hitting too hard, explore directives in your robots.txt file that the search engines can see. Or, in Google's case, go into your Webmaster Tools and turn down the crawl rate from there.
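For the robots.txt route, here is a sketch of what such a file might look like. Support varies by engine: the Crawl-delay directive is honored by crawlers such as Yahoo's Slurp and Microsoft's, but Google ignores it, which is exactly why Google's crawl rate has to be adjusted in Webmaster Tools instead. The paths below are placeholders.

```
# Ask compliant crawlers to wait 5 seconds between requests.
# Note: Googlebot ignores Crawl-delay; set its rate in Webmaster Tools.
User-agent: Slurp
Crawl-delay: 5

User-agent: msnbot
Crawl-delay: 5

# Keep all crawlers out of expensive, low-value pages (placeholder path).
User-agent: *
Disallow: /search/
```

This keeps the load down at the source instead of throttling with server errors, so the engines see slower crawling rather than a "server busy" message.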