Spiders are the name given to the robots that visit your sites. They are called spiders because they crawl across the web, visiting websites. When they land on a site they visit every page they can, reading the information available. What they do with this information depends on who is controlling the crawler. If you are thinking of having a website built, there are certain design aspects to consider which will help or hinder a spider, so if you are looking for web design in Birmingham then make sure the company understands how to build a site properly so that it is robot friendly.

Read more about web design in Birmingham

Who owns these spiders?

Anyone can own a spider; some are sold, while others are built in-house. There are plenty of companies that own spider software, and many will sell it to you so that you can crawl websites yourself. Other spiders are used by search engines. Google uses several different spiders to crawl the web, each one looking for particular information. Google gathers this information and then uses it, in part, to rank websites.

How spiders work

Different spiders work in different ways, but they follow similar patterns. When a spider lands on a page, it will look for things like the page title (the <title> tag), the main heading (h1) and the meta description. It will then read the other information on the page, the main body of text. If there are hyperlinks on that page, the spider will visit them and repeat the process, logging information the whole time. Spiders can't 'read' or see images, videos or Flash animations the way a human can, but they can gain useful information from them by reading the file name and the alt text.
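The process above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration using only the standard library's html.parser; the sample page and class name are invented for the example, and a real crawler would fetch pages over HTTP and follow the links it collects.

```python
from html.parser import HTMLParser

class SpiderParser(HTMLParser):
    """Collects the on-page data a spider typically reads:
    the title, the h1, the meta description and any links."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.h1 = ""
        self.meta_description = ""
        self.links = []
        self._capturing = None  # tag whose text is currently being read

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("title", "h1"):
            self._capturing = tag
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            # A real spider would queue this URL and crawl it next.
            self.links.append(attrs["href"])

    def handle_endtag(self, tag):
        if tag == self._capturing:
            self._capturing = None

    def handle_data(self, data):
        if self._capturing == "title":
            self.title += data
        elif self._capturing == "h1":
            self.h1 += data

# An invented sample page for demonstration.
page = """<html><head><title>Example Shop</title>
<meta name="description" content="Hand-made goods."></head>
<body><h1>Welcome</h1><a href="/about">About us</a></body></html>"""

spider = SpiderParser()
spider.feed(page)
print(spider.title)             # Example Shop
print(spider.h1)                # Welcome
print(spider.meta_description)  # Hand-made goods.
print(spider.links)             # ['/about']
```

A full crawler simply repeats this parse on every URL it collects, which is why a clear title, heading and meta description on each page matter so much.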

Spiders and links

Spiders will always follow a link to its destination unless they are told not to. This can be done in several ways: adding a nofollow attribute to the link's code (best for a link going to an external site), adding a Disallow line to the robots.txt file to stop spiders visiting that page (internal pages only), or password-protecting the page. Some spiders, however, do not respect nofollow or robots.txt.

Making a site spider friendly

Having links on your site is the key part, but clearly labelling the different areas of the site also helps the spider. Using the href attribute correctly will also help the spider not to get lost.
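These checks can even be automated. The following is a small illustrative sketch (the class name and sample markup are invented) that flags two of the problems mentioned here: anchors with no href, which give a spider nothing to follow, and images with no alt text, which a spider cannot 'see'.

```python
from html.parser import HTMLParser

class FriendlinessChecker(HTMLParser):
    """Flags markup that can confuse a spider: <a> tags with no
    href to follow, and <img> tags with no alt text to read."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and not attrs.get("href"):
            self.problems.append("<a> with no href")
        if tag == "img" and not attrs.get("alt"):
            self.problems.append(f"<img src={attrs.get('src')!r}> with no alt text")

checker = FriendlinessChecker()
# A JavaScript-only link and an unlabelled image: both invisible to a spider.
checker.feed('<a onclick="go()">Shop</a><img src="logo.png">')
print(checker.problems)
```

Running a check like this across a site is a quick way to spot pages a spider would struggle with.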

Spiders and search engines

As most search engines use spiders to visit sites and understand them, making your site spider friendly can help with search rankings. Clearly setting out the on-page data, having menu links, having internal links in the content that point to relevant pages, adding alt text to images and naming image files clearly are all helpful things to do. Another factor is having links from other sites pointing at your site: as a spider crawls those sites it will see the link and then, hopefully, visit yours.

In conclusion, it is very beneficial to have a site which is spider friendly, as it will have an impact on your search presence. If you are looking to have a site built, make sure the company building it knows this.