How Search Engines Find Documents

By: Kamlesh Patel

Every document on the Web is associated with a URL (Uniform Resource Locator). In this context, we will use the terms “document” and “URL” interchangeably. This is an oversimplification, as some URLs return different documents depending on factors such as the user's location, browser type, and form input, but this terminology suits our purposes for now.

To find every document on the Web would mean more than finding every URL on the Web. For this reason, search engines do not currently attempt to locate every possible unique document, although research is always underway in this area. Instead, crawling search engines focus their attention on unique URLs; although some dynamic sites may display different content at the same URL (via form inputs or other dynamic variables), search engines will see that URL as a single page.
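Because the crawler keys its database on unique URLs, it needs some way to recognize that superficially different URLs name the same page. A minimal sketch of such normalization in Python follows; the `canonicalize` function and its specific rules are illustrative, not any particular engine's actual algorithm:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Reduce a URL to a canonical form so duplicates map to one entry.

    Simplified, illustrative rules: lowercase the scheme and host,
    drop the fragment, strip the default HTTP port, and default the
    path to "/".
    """
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    netloc = parts.netloc.lower()
    if scheme == "http" and netloc.endswith(":80"):
        netloc = netloc[:-3]
    return urlunsplit((scheme, netloc, parts.path or "/", parts.query, ""))

# Two spellings of the same page collapse to a single database entry.
seen = set()
for url in ["HTTP://Example.com:80/page#top", "http://example.com/page"]:
    seen.add(canonicalize(url))

print(len(seen))  # both URLs reduce to one canonical form
```

Real engines apply many more rules (sorting query parameters, resolving redirects, comparing page content), but the principle is the same: map the space of URL spellings down to one key per page.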

The typical crawling search engine uses three main resources to build a list of URLs to crawl. Not all search engines use all of these:


Hyperlinks on existing Web pages
The bulk of the URLs found in the databases of most crawling search engines consists of links found on Web pages that the spider has already crawled. Finding a link to a document on one page implies that someone found that link important enough to add it to their page.
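The spider discovers these links by parsing the HTML of each fetched page and resolving relative references against the page's own URL. A simplified illustration using Python's standard-library HTML parser; the class name and the sample markup are invented for the example:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag, resolved against the page URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Relative links become absolute URLs the spider can queue.
                    self.links.append(urljoin(self.base_url, value))

html = '<p>See <a href="/about">about</a> and <a href="http://other.example/">this</a>.</p>'
parser = LinkExtractor("http://example.com/index.html")
parser.feed(html)
print(parser.links)
```

Every URL this yields becomes a candidate for the crawl list, which is how one crawled page leads the spider to many more.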


Submitted URLs
All the crawling search engines have some sort of process that allows users or Website owners to submit URLs to be crawled. In the past, all search engines offered a free manual submission process, but now many accept only paid submissions. Google is a notable exception, with no apparent plans to stop accepting free submissions, although there is considerable doubt as to whether submitting has any real effect.


XML data feeds
Paid inclusion programs, such as the Yahoo! Site Match system, include trusted feed programs that allow sites to submit XML-based content summaries for crawling and inclusion. As the Semantic Web begins to emerge, and more sites begin to offer RSS (RDF Site Summary) news feed files, some search engines have begun to read these files in order to find fresh content.
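Reading a news feed for fresh content amounts to extracting each item's link element and queuing those URLs for crawling. A small sketch against an RSS 2.0-style feed; the feed content below is made up for the example, and real engines must handle several feed dialects (RSS 1.0/RDF, RSS 2.0, Atom):

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed, as a site might publish it.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <item><title>Story one</title><link>http://example.com/story1</link></item>
    <item><title>Story two</title><link>http://example.com/story2</link></item>
  </channel>
</rss>"""

def links_from_rss(xml_text):
    """Pull the item-level links out of an RSS 2.0 feed document."""
    root = ET.fromstring(xml_text)
    return [item.findtext("link") for item in root.iter("item")]

print(links_from_rss(FEED))
```

Because feeds are published precisely when content changes, they give the engine a much cheaper freshness signal than re-crawling a whole site on a schedule.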

Search engines run multiple crawler programs, and each crawler program (or spider) receives instructions from the scheduler about which URL (or set of URLs) to fetch next. We will see how search engines manage the scheduling process shortly, but first, let’s take a look at how the search engine’s crawler program works.
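The scheduling idea described above can be sketched as a simple URL frontier: a queue of unvisited URLs plus a set of everything already seen, from which each spider draws its next assignment. The class and method names here are hypothetical; production schedulers add politeness delays, priorities, and per-host queues:

```python
from collections import deque

class Scheduler:
    """Illustrative URL frontier: hands out unseen URLs to crawler workers."""

    def __init__(self, seeds):
        self.queue = deque(seeds)   # URLs waiting to be fetched
        self.seen = set(seeds)      # every URL ever enqueued

    def next_url(self):
        """Give a spider its next URL, or None when the frontier is empty."""
        return self.queue.popleft() if self.queue else None

    def add(self, url):
        """Enqueue a newly discovered URL unless it was seen before."""
        if url not in self.seen:
            self.seen.add(url)
            self.queue.append(url)

sched = Scheduler(["http://example.com/"])
first = sched.next_url()               # a spider fetches this URL...
sched.add("http://example.com/about")  # ...and reports the links it found
sched.add("http://example.com/")       # already seen, silently ignored
second = sched.next_url()
print(first, second)
```

The `seen` set is what keeps the crawl from looping forever over pages that link to each other.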
