Don’t Let Small Disconnects Lead to SEO Disaster

Nothing makes for a crappier morning than grabbing a coffee, sitting down at your analytics dashboard, and finding out that your or your client’s organic traffic has hit the floor. Even worse, you then discover that the top-ranking pages are now non-existent in the search engine results pages (SERPs).

Have you educated a client’s developer about the no-nos of search engine optimization (SEO) at a structural level? Let’s examine a few structural site elements that, without proper instruction, can lead to SEO disaster.


The Robots.txt File

The first stop for any crawling search engine is the robots.txt file. Using this file can be a great way to keep content that doesn’t belong in search results away from the eyes of the bots. But this file also has the potential to wipe an entire site out of the search engines’ indices.

At the most basic level, any instance of “Disallow: /” will leave the site completely invisible to search engines. It’s also important to note that disallowing a top-level directory (e.g., “Disallow: /top-category/”) will remove that category and every subcategory beneath it from search engine view.
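For illustration, the two patterns above look like this in a robots.txt file (the directory name is a placeholder):

```
# Wipes the entire site out of search engine indices:
User-agent: *
Disallow: /

# Removes /top-category/ and every subcategory beneath it:
User-agent: *
Disallow: /top-category/
```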

Sites often exclude their main image folder as well as their CSS and JavaScript folders in their robots.txt file. Never exclude your image folder — unless you don’t want to pick up any visitors from image search results.

As for CSS and JavaScript, let search engines see these files. They allow the bots to view the template structure and confirm that you aren’t hiding text or using any other hidden spam tactics. Any robots.txt revisions should be passed to the site’s SEO resource for approval.

  • Check out your robots.txt file directly from your site, Google Webmaster Tools, or other SEO tools such as Optispider.
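If you’d rather test rules programmatically, Python’s standard library can evaluate a rule set against sample URLs. A minimal sketch — the domain and paths here are illustrative:

```python
from urllib.robotparser import RobotFileParser

# Parse a rule set the way a compliant crawler would.
rules = [
    "User-agent: *",
    "Disallow: /top-category/",
]
rp = RobotFileParser()
rp.parse(rules)

# The disallowed directory and everything beneath it are blocked...
print(rp.can_fetch("*", "https://example.com/top-category/page.html"))  # False
# ...while the rest of the site stays crawlable.
print(rp.can_fetch("*", "https://example.com/other-page.html"))  # True
```

Running this kind of check before and after any robots.txt revision is a quick way to catch an accidental site-wide or category-wide block.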

Robots Meta Tags

While it’s possible to exclude specific pages at the robots.txt level, you also have the ability to exclude content and links at the page level via the robots meta tag. Placed in the head of a page’s source code, this tag carries the same SERP-position-swiping power.

Knowledgeable SEOs often use these tags to pass or withhold link value while either providing or restricting content availability to search engines. Much like the robots.txt file, this tag can do a lot of damage in the SERPs for your SEO campaign. Any use of this code element should be passed to the site’s SEO resource for approval.
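As a sketch, the common forms of the tag look like this — whether the page is indexed and whether its links pass value are set independently:

```html
<!-- Keep the page out of the index, but follow its links and pass value: -->
<meta name="robots" content="noindex, follow">

<!-- Keep the page out of the index and withhold its link value: -->
<meta name="robots" content="noindex, nofollow">
```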

Nofollow Code

Misuse of this HTML attribute isn’t as common as misuse of the two elements previously mentioned. However, it deserves a mention because it falls into the same overall category: accidentally blocking the pathway or restricting access for search engines.

This attribute is appended to link code as rel="nofollow". It is another tool that can be used to sculpt the flow of internal link juice. While misuse won’t have a devastating impact on SEO efforts, it can definitely be a hindrance if the attribute is placed and the SEO team isn’t notified.
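In full link markup, the attribute looks like this (the URL and anchor text are placeholders):

```html
<!-- Search engines are asked not to pass link value through this link: -->
<a href="https://example.com/" rel="nofollow">Anchor text</a>
```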

  • Check out your nofollow tag usage at Open Site Explorer or the SEO Toolbar.

These three elements are widely understood and considered commonplace in SEO. Each of them can help your SEO efforts if used correctly. So make sure that those involved in site maintenance understand that misusing these files and tags can send organic search referrals plummeting.

Sometimes the simple mistakes are the ones that hurt the most. Assuming that all parties involved in an SEO campaign are as savvy as you could lead to SEO disaster.

