January 2017

GoogleBot Crawl Strategy for SEO

One important facet of search engine optimization is understanding the depth and frequency of the Googlebot crawl. Put another way, there are strategies for maximizing the number of pages that search engines crawl and index. This is particularly important for sites that are not updated frequently, that do not focus on recent news, or that contain 1,000 pages or more.

While you are at it, it is also a good idea to review your server log files to confirm that campaign URLs are being crawled correctly, that traffic sources are being attributed to the right place, and that acquisition data is being tracked properly.

What is a server log file?

A server log file records every request made to a server. Each entry includes the requested URL, the requesting IP address, and the date and time of the request.
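For illustration, here is a hypothetical line from an Apache-style combined access log recording a Googlebot request (the IP address, URL, sizes, and timestamp are invented for the example):

    66.249.66.1 - - [15/Jan/2017:06:25:24 +0000] "GET /products/widgets HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"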

Why do we care about server log files?

The log files indicate, among other things, when a site has been crawled, where access problems occurred, and what errors were returned. Each entry contains a response code: 200 means the page was served successfully, while codes such as 404 or 500 mean there was a problem with page access.
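As a rough sketch of what that review can look like, the Python script below tallies Googlebot requests by response code and lists the URLs that returned something other than a 2xx response. The file name access.log and the combined-log regular expression are assumptions; adjust them to match your own server's log format and location.

    import re
    from collections import Counter

    # Pattern for a combined-format access log line (assumed format).
    LOG_LINE = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<url>\S+) [^"]*" '
        r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
    )

    status_counts = Counter()   # Googlebot requests per response code
    error_urls = Counter()      # URLs that returned non-2xx responses

    with open("access.log") as log:     # hypothetical log file name
        for line in log:
            m = LOG_LINE.match(line)
            if not m or "Googlebot" not in m.group("agent"):
                continue                # keep only Googlebot requests
            status = m.group("status")
            status_counts[status] += 1
            if not status.startswith("2"):
                error_urls[m.group("url")] += 1

    print("Googlebot responses by status code:", dict(status_counts))
    print("Most common non-2xx URLs:", error_urls.most_common(10))

Keep in mind that the user-agent string can be spoofed, so for a thorough audit you would also verify that the requesting IP addresses really belong to Google, for example with a reverse DNS lookup.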

Resources

Here are some helpful resources for understanding and addressing both crawl strategy and server log analysis: