Ajax and Search Engines

The Google Webmaster blog has a post today about how to develop search engine friendly sites even when they use Ajax.

One of the things that the post reinforces is that we still live in a world where crawlers have problems with Ajax and JavaScript. It does include a nugget on the optimum way to help Ajax and static HTML links coexist:

When creating your links, format them so they’ll offer a static link as well as calling a JavaScript function. That way you’ll have the Ajax functionality for JavaScript users, while non-JavaScript users can ignore the script and follow the link. For example:

<a href="ajax.htm?foo=32" onClick="navigate('ajax.html#foo=32'); return false">foo 32</a>

Note that the static link’s URL has a parameter (?foo=32) instead of a fragment (#foo=32), which is used by the Ajax code. This is important, as search engines understand URL parameters but often ignore fragments.
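To make the parameter/fragment relationship concrete, here is a minimal sketch (the function name and URL shapes are our own illustration, not from the post) that derives the fragment-style Ajax URL from the crawlable parameter URL, so the two versions of a link stay in sync:

```javascript
// Illustration only: map a crawlable parameter URL ("page.htm?foo=32")
// to its fragment-based Ajax counterpart ("page.htm#foo=32").
// Search engines index the former; the client-side script acts on the latter.
function toAjaxUrl(staticUrl) {
  var i = staticUrl.indexOf('?');
  if (i < 0) return staticUrl;  // no query string: nothing to map
  return staticUrl.slice(0, i) + '#' + staticUrl.slice(i + 1);
}
```

Generating both URLs from one source of truth avoids the static and Ajax paths drifting apart as the site grows.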

Basically, the code fragment above presents a static link to the search engine, but still follows your desired Ajax path through the onClick handler.
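The post doesn't show what the JavaScript function behind the onClick handler actually does. A plausible sketch (entirely our assumption, including the name navigate) parses the state encoded in the fragment so the Ajax code can react to it, while the handler's return false keeps the browser from also following the static href:

```javascript
// Hypothetical navigate() helper for the onClick pattern described above:
// parse the state encoded in the URL fragment, e.g. "#foo=32" -> { foo: "32" }.
// In a real browser this is where you would update location.hash and fetch
// the matching content via XMLHttpRequest; here we only do the parsing.
function navigate(url) {
  var hash = url.indexOf('#');
  var state = {};
  if (hash < 0) return state;
  var pairs = url.slice(hash + 1).split('&');
  for (var i = 0; i < pairs.length; i++) {
    var kv = pairs[i].split('=');
    state[kv[0]] = kv[1];
  }
  return state;
}
```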

Our own advice to people who want to build tools, or have other interesting uses for Ajax, is that it’s a great thing to do. In the increasingly social web, usability and site experience are playing a bigger and bigger role in driving traffic, and yes, in driving links to the site.

Don’t be afraid to leap in and create such tools. Just make sure you still leave a trail that the crawler can follow.
