Huge databases that generate Web site content on the fly can be the bane of search engine spiders’ existence: if a crawler can’t discover a page’s URL, it can’t index the page. In a two-part SearchDay series, “Search Engine Visibility and Site Crawlability, Part 1” and “Search Engine Visibility and Site Crawlability, Part 2,” Eric Enge looks at key problem areas for sites with dynamically generated content, including information architecture and keyword research, robots.txt files, and the use of Sitemaps.
February 12, 2016
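As an illustrative sketch of the robots.txt and Sitemap techniques the series covers (the paths, parameter name, and domain below are hypothetical, not taken from the articles): a dynamically generated site can steer crawlers away from parameter-driven duplicate URLs while pointing them at an XML Sitemap listing the canonical pages.

```
# Hypothetical robots.txt for a site with dynamically generated URLs.
# Note: the "*" wildcard in Disallow is honored by major crawlers such
# as Googlebot and Bingbot, but is not part of the original robots
# exclusion standard.
User-agent: *
Disallow: /search          # internal search result pages
Disallow: /*?sessionid=    # session-ID variants of the same content

# Point crawlers at a Sitemap enumerating the canonical URLs.
Sitemap: https://www.example.com/sitemap.xml
```

Blocking crawl paths only solves half the problem; the Sitemap directive addresses the other half by explicitly listing pages a spider could not otherwise discover by following links.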