• Crawling – Googlebot / spiders / robots
• Indexing – can the site be ‘read’?
• Serving – returning relevant content

Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index. Google uses a huge set of computers to fetch (or “crawl”) billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site. Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. It also processes information included in key content tags and attributes, such as Title tags and ALT attributes. Googlebot can process many, but not all, content types.
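As a concrete illustration, site owners can influence which pages Googlebot fetches with a robots.txt file placed at the site root. This is a hedged sketch, not a rule you should copy verbatim – the paths and URL here are hypothetical:

```
# robots.txt – hypothetical example
# Keep Googlebot out of a private area, allow everything else
User-agent: Googlebot
Disallow: /admin/
Allow: /

# Point crawlers at the XML sitemap (placeholder URL)
Sitemap: https://www.example.com/sitemap.xml
```

Note that robots.txt is a request, not an access control: well-behaved crawlers like Googlebot honor it, but it does not hide content from the web.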

When a user enters a query, Google's machines search the index for matching pages and return the results they believe are the most relevant to the user. Relevancy is determined by over 200 factors, one of which is the PageRank of a given page. PageRank measures the importance of a page based on the incoming links it receives from other pages.
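The core idea behind PageRank – that a page is important when important pages link to it – can be sketched with a few lines of power iteration. The link graph and damping factor below are illustrative assumptions for a tiny three-page site, not Google's actual data or parameters:

```python
# Minimal PageRank sketch: each page's score is redistributed along its
# outgoing links; the damping factor models a surfer who sometimes jumps
# to a random page. Graph and damping value are illustrative assumptions.
links = {
    "home":      ["about", "inventory"],
    "about":     ["home"],
    "inventory": ["home", "about"],
}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}  # start with equal scores

for _ in range(50):  # iterate until the scores settle
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new[target] += share
    rank = new

# "home" ends up with the highest score: every other page links to it.
print(max(rank, key=rank.get))
```

Running this, "home" comes out on top because it collects links from both other pages, which is exactly the intuition the paragraph above describes.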

The Basics
• Design your site for users with SEO in mind, not the other way around.
• Make sure each page is clearly linked on the site.
• Keep the navigation simple and understandable – do not overload the user with too many links.
• Include a site map with links to each page.
• Make sure each page contains relevant content that matches its purpose.
• Do not try to cheat!
– Deceptive ‘Black Hat’ tactics can get a site penalized or removed from the index.
• Submit your site to Google –
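The site-map advice above is commonly implemented as an XML sitemap, which is also what you submit to Google. A minimal sketch of the format – the URLs and date are placeholders, not real pages:

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2011-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/inventory</loc>
  </url>
</urlset>
```

Each `<url>` entry lists one page; `<lastmod>` is optional but helps crawlers prioritize recently updated pages.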

Please be sure to leave your comments, feedback, retweets, or just add valuable information to this blog, as our goal is to educate car dealers so they can be better today than they were yesterday. Make sure you follow Liquid Motors on Facebook, Twitter, and our blog for a variety of information for car dealers.