Spiders: creepy and crawly, but in this form, very useful.
A search engine “spider,” also known as a “crawler,” is a software program that search engines such as Google use to discover what’s out there on the web. The web is a huge place, so something needs to travel around it every second of every day to see what it offers, and the spider is it.
Once a page is loaded, the spider follows every hyperlink it finds on that page. Much like a real spider crawls through its web and finds every insect stuck in it, the web “spider” crawls from site to site and will eventually find your information.
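The link-following step can be sketched in a few lines of Python using the standard library’s `html.parser`. This is a toy illustration, not a real crawler; the page HTML and URLs below are made up:

```python
from html.parser import HTMLParser

class LinkSpider(HTMLParser):
    """Collects the href of every <a> tag, the way a crawler gathers links to follow."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page content for illustration only.
page = """
<html><body>
  <a href="/about.html">About</a>
  <a href="https://example.com/contact">Contact</a>
</body></html>
"""

spider = LinkSpider()
spider.feed(page)
print(spider.links)  # the URLs a crawler would queue up to visit next
```

A real crawler does the same thing at massive scale: fetch a page, pull out its links, and add each new link to the queue of pages to visit.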
When a spider visits your web page, the content on your page gets loaded into a database (picture a gigantic Excel file the size of your city). After your web page has been retrieved, the search engine loads your content into its index: like drawers and drawers of index cards, your words get organized.
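That “drawers of index cards” structure is what search engineers call an inverted index: each word points to the pages that contain it. Here is a minimal sketch, with made-up page URLs and text:

```python
# Toy pages: URL -> page text (hypothetical examples).
pages = {
    "https://example.com/spiders": "spiders crawl the web",
    "https://example.com/seo": "seo helps the web find you",
}

# Build the inverted index: word -> set of pages containing that word.
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

# Looking up a word returns every page that contains it.
print(sorted(index["web"]))
```

When you search for a word, the engine pulls the matching “drawer” instead of re-reading every page on the web, which is why results come back in a fraction of a second.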
In SEO terms, the spider goes out and finds your pages, breaks down the words on each page, and feeds every URL it discovers back into the search engine so those pages can be crawled in turn.
The first thing a spider does when it visits your site is look for a file called “robots.txt.” This special file tells the spider what to index and what not to index; if robots.txt tells the spider to stay away from a page, that page is left out of the index, which is one reason your pages may not show up in a search engine.
A robots.txt file is not required, though: if a spider doesn’t find one, it simply crawls everything it can reach. A spider finds your pages by following hyperlinks from pages it has already found.
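You can see how a spider interprets robots.txt using Python’s built-in `urllib.robotparser`. The rules below are a hypothetical robots.txt that blocks a /private/ folder:

```python
from urllib import robotparser

# Hypothetical robots.txt contents: allow everything except /private/.
rules = """
User-agent: *
Disallow: /private/
""".strip().splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A polite spider checks each URL against the rules before fetching it.
print(rp.can_fetch("*", "https://example.com/index.html"))  # True: allowed
print(rp.can_fetch("*", "https://example.com/private/x"))   # False: disallowed
```

This is exactly the check a well-behaved crawler performs before requesting any page from your site.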
A search engine may offer a URL submission form where you can request that it add your site to its index; in most cases this is a good idea. One last thing I have learned: if you are submitting your site to a search engine, it is very important not to use the sites or purchasable software that promise to submit your site to hundreds of engines at once. This does not work. Building up more links to your site will also improve your rankings.