How Search Engines Find Documents

By: Kamlesh Patel

Every document on the Web is associated with a URL (Uniform Resource Locator). In this context, we will use the terms “document” and “URL” interchangeably. This is an oversimplification, as some URLs return different documents depending on factors such as the user's location, browser type, and form input, but this terminology suits our purposes for now.

Finding every document on the Web would therefore mean more than finding every URL on the Web. For this reason, search engines do not currently attempt to locate every possible unique document, although research is always underway in this area. Instead, crawling search engines focus their attention on unique URLs; although some dynamic sites may display different content at the same URL (via form inputs or other dynamic variables), search engines will see that URL as a single page.
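Treating each unique URL as a single page depends on recognizing when two differently spelled addresses point to the same place. A minimal sketch of that idea (this is an illustration, not any engine's actual canonicalization rules): lowercase the scheme and host, drop the fragment, and treat an empty path as "/".

```python
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Reduce a URL to a canonical form so trivially different
    spellings of the same address count as one page."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    # The fragment never reaches the server, so it is dropped; an
    # empty path and "/" are the same resource.
    return urlunsplit((scheme.lower(), netloc.lower(), path or "/", query, ""))

seen = set()
for url in ["http://Example.com", "http://example.com/", "http://example.com/#top"]:
    seen.add(normalize(url))

print(len(seen))  # all three spellings collapse to one URL, so 1
```

Real crawlers apply many more rules (resolving "..", stripping session IDs, honoring redirects), but the principle is the same: deduplicate before crawling.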

The typical crawling search engine draws on three main resources to build its list of URLs to crawl, though not every engine uses all of them:


Hyperlinks on existing Web pages
The bulk of the URLs found in the databases of most crawling search engines consists of links found on Web pages that the spider has already crawled. Finding a link to a document on one page implies that someone found that document important enough to link to it from their page.
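Harvesting these links from a crawled page is mechanically simple. As a sketch (using Python's standard-library HTML parser; real spiders are far more tolerant of broken markup), a crawler pulls the `href` from every anchor tag and resolves relative links against the page's own URL:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag, resolved against the page URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Relative links like "/about.html" become absolute URLs.
                    self.links.append(urljoin(self.base_url, value))

page = '<p>See <a href="/about.html">about</a> and <a href="http://other.example/">this</a>.</p>'
parser = LinkExtractor("http://example.com/index.html")
parser.feed(page)
print(parser.links)
# ['http://example.com/about.html', 'http://other.example/']
```

Every URL extracted this way becomes a candidate for the crawl queue, which is why well-linked pages are found so much faster than orphaned ones.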


Submitted URLs
All the crawling search engines have some sort of process that allows users or Website owners to submit URLs to be crawled. In the past, all search engines offered a free manual submission process, but now, many accept only paid submissions. Google is a notable exception, with no apparent plans to stop accepting free submissions, although there is great doubt as to whether submitting actually does anything.


XML data feeds
Paid inclusion programs, such as the Yahoo! Site Match system, include trusted feed programs that allow sites to submit XML-based content summaries for crawling and inclusion. As the Semantic Web begins to emerge, and more sites begin to offer RSS (RDF Site Summary) news feed files, some search engines have begun to read these files in order to find fresh content.
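Reading such a feed is straightforward because the format is structured XML rather than free-form HTML. A minimal sketch with a made-up feed (the URLs and titles here are placeholders, not real data): each `<item>` element carries a `<link>` pointing at one fresh document.

```python
import xml.etree.ElementTree as ET

rss = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example News</title>
  <item><title>First story</title><link>http://example.com/a</link></item>
  <item><title>Second story</title><link>http://example.com/b</link></item>
</channel></rss>"""

root = ET.fromstring(rss)
# Each <item> in the channel announces one fresh document to crawl.
fresh_urls = [item.findtext("link") for item in root.iter("item")]
print(fresh_urls)  # ['http://example.com/a', 'http://example.com/b']
```

Because the site itself announces new content this way, the engine can fetch fresh pages without waiting to rediscover them through links.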

Search engines run multiple crawler programs, and each crawler program (or spider) receives instructions from the scheduler about which URL (or set of URLs) to fetch next. We will see how search engines manage the scheduling process shortly, but first, let’s take a look at how the search engine’s crawler program works.
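The scheduler's role can be sketched as a shared URL frontier: a queue of unvisited URLs that hands out batches to crawlers and absorbs the links they discover. This is a toy FIFO model for illustration only; production schedulers also weigh politeness, priority, and freshness.

```python
from collections import deque

class Scheduler:
    """A toy URL frontier: hands unseen URLs to crawlers in FIFO order."""
    def __init__(self, seed_urls):
        self.queue = deque(seed_urls)
        self.seen = set(seed_urls)

    def next_batch(self, n):
        """Give a crawler its next n URLs to fetch."""
        batch = []
        while self.queue and len(batch) < n:
            batch.append(self.queue.popleft())
        return batch

    def add_discovered(self, urls):
        """Queue links found on crawled pages, skipping duplicates."""
        for url in urls:
            if url not in self.seen:
                self.seen.add(url)
                self.queue.append(url)

sched = Scheduler(["http://example.com/"])
batch = sched.next_batch(2)  # the crawler fetches the seed URL
# Suppose the crawled page linked to /about and back to the seed:
sched.add_discovered(["http://example.com/about", "http://example.com/"])
print(sched.next_batch(2))   # only the new URL remains: ['http://example.com/about']
```

The loop between crawler and scheduler, fetch a batch, report discovered links, fetch the next batch, is what lets the engine's URL list grow from a handful of seeds to billions of pages.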


