Site architecture and SEO file/page issues - SEM 101


If you care about your website being found in search, you’ll want to help the search crawler do its job. Or at the very minimum, you should remove any obstacles under your control that can get in its way. The more efficiently the search engine bot crawls your site, the higher the likelihood that more of its content will end up in the index. Here are some suggestions for addressing SEO site architecture issues related to files and pages:

  • Use descriptive file and directory names – avoid using underscores as word separators; instead use hyphens.
  • Limit directory depth – make your site’s directory structure shallow, no deeper than four child directories from the root.
  • Limit physical page file size – keep your individual webpage files under 150 KB each.
  • Externalize on-page JavaScript and CSS code – this simplifies code maintenance, shortens your webpage source, and lets multiple pages share the same files simultaneously. Follow these examples on how to reference external JavaScript and CSS code in your HTML pages.
  • Use 301 redirects for moved pages to avoid losing all of your previously earned site ranking “link juice.” Follow these Windows Server Internet Information Services (IIS) or Apache HTTP Server guidelines.
  • Avoid JavaScript or meta refresh redirects.
  • Implement custom 404 pages.
  • Other crawler traps – the search engine bot doesn’t see the Web as you do, so avoid frames, forms, and authentication pages on your website. To keep the search engine bot from going places that might trip it up, use the “noindex” value of the robots meta tag to prevent indexing of whole pages.
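On the externalized JavaScript and CSS bullet above: a minimal sketch of referencing external files from a page’s `<head>` looks like this (the file paths are placeholders for your own files):

```html
<head>
  <!-- External stylesheet: one cached file shared by every page -->
  <link rel="stylesheet" href="/css/site.css">
  <!-- External script: keeps the page source lean; "defer" avoids blocking rendering -->
  <script src="/js/site.js" defer></script>
</head>
```

Because the browser caches these files once, every page that references them loads less markup, and the crawler fetches smaller HTML.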
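For the 301 redirect bullet, here is a sketch of the Apache HTTP Server case using mod_alias directives in an `.htaccess` file (the old and new paths are placeholders):

```apache
# Permanently redirect a moved page so its earned ranking follows it
Redirect 301 /old-page.htm /new-page.htm

# Or redirect an entire moved directory, preserving the rest of the path
RedirectMatch 301 ^/old-dir/(.*)$ /new-dir/$1
```

IIS offers the equivalent through HTTP Redirect settings; either way, the key is the permanent (301) status code rather than a temporary (302) one.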
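For reference on the redirects-to-avoid bullet, the meta refresh pattern looks like the snippet below; search engines may treat it as a low-quality redirect and not pass ranking signals to the destination:

```html
<!-- Avoid: meta refresh redirect that crawlers handle poorly -->
<meta http-equiv="refresh" content="0; url=/new-page.htm">
```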
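On the custom 404 bullet: with Apache, pointing the server at a custom error page is one line of configuration (the page path is a placeholder). Make sure the server still returns a 404 status code, not a 200, so search engines don’t index the error page itself:

```apache
# Serve a helpful custom page for missing URLs, keeping the 404 status
ErrorDocument 404 /errors/not-found.htm
```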
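Finally, for the crawler traps bullet: a page-level noindex is set with the robots meta tag in the page’s `<head>`:

```html
<head>
  <!-- Tell search engine bots not to add this page to their index -->
  <meta name="robots" content="noindex">
</head>
```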