Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, webmasters needed only to submit the address of a page, or URL, to the various engines, which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[5] The process involves a search engine spider downloading a page and storing it on the search engine's own server. A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight given to specific words, as well as all links the page contains. All of this information is then placed into a scheduler for crawling at a later date.
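The crawl-and-index cycle described above can be sketched as a toy model. This is a minimal illustration, not how any real search engine is implemented: the parser class, `index_page` function, and example page are all hypothetical, and it works on an in-memory HTML string rather than fetching pages over the network.

```python
from collections import deque
from html.parser import HTMLParser

class LinkAndTextParser(HTMLParser):
    """Collects outbound links and visible words from an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.words = []

    def handle_starttag(self, tag, attrs):
        # The "spider" step: gather links to other pages.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        # The "indexer" step: gather the words the page contains.
        self.words.extend(data.split())

def index_page(url, html, inverted_index, scheduler):
    """Record word -> (url, position) entries and queue new links
    in the scheduler for crawling at a later date."""
    parser = LinkAndTextParser()
    parser.feed(html)
    for position, word in enumerate(parser.words):
        inverted_index.setdefault(word.lower(), []).append((url, position))
    for link in parser.links:
        scheduler.append(link)

# Usage with a hypothetical in-memory page:
inverted_index = {}
scheduler = deque()
page = '<p>Early search engines</p><a href="/about">About</a>'
index_page("http://example.com/", page, inverted_index, scheduler)
```

After this runs, `inverted_index` maps each word to the pages and positions where it occurs, and `scheduler` holds the extracted links awaiting a future crawl, mirroring the spider/indexer/scheduler division in the paragraph above.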