Deep Web

In addition to the part of the WWW visible to search engines, there are many web pages they do not cover. Access to such resources is often possible without entering any logins or passwords. Typically these pages are available online, but reaching them is difficult and sometimes impossible unless you know the exact address (or the specific access rules). These resources have long had a name of their own: the "deep" web, a term introduced by Jill Ellsworth in 1994 to designate documents inaccessible to conventional search engines.

Today these resources are also called the "invisible" or "hidden" web. They often include dynamically generated web pages whose content is stored in databases and is available only on request. Sometimes access to such pages requires passing a so-called Turing test (a test of reasonableness): solving an arithmetic problem, a puzzle, or simply typing into a field the characters shown in an image. In 2000 the American company BrightPlanet published a sensational report stating that the web contains hundreds of times more pages than are indexed by the most popular search engines. The same company developed a program, LexiBot, capable of scanning some dynamic websites built on databases and extracting data from them.
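The mechanics of why such pages stay invisible can be illustrated with a minimal sketch. Everything here is hypothetical (the table, the paths, the `render_page` helper are all invented for illustration): content lives in a database and is rendered only in response to an explicit form query, so a crawler that merely follows hyperlinks never triggers the query and never sees the content.

```python
import sqlite3

# Hypothetical deep-web page: records live in a database, not in static HTML.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patents (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO patents (title) VALUES (?)",
                 [("Widget fastener",), ("Gadget hinge",)])

def render_page(path, query=None):
    """Return the HTML a server might send for a given request (a sketch)."""
    if path == "/":
        # The landing page is static and indexable, but it contains no
        # <a href> links: results exist only behind the search form.
        return "<form action='/search'><input name='q'></form>"
    if path == "/search" and query:
        rows = conn.execute(
            "SELECT title FROM patents WHERE title LIKE ?",
            (f"%{query}%",)).fetchall()
        return "".join(f"<li>{t}</li>" for (t,) in rows)
    return "<p>404</p>"

# A link-following crawler only ever retrieves the landing page:
print(render_page("/"))                       # no links to follow onward
# A human who submits the form reaches the "deep" content:
print(render_page("/search", query="hinge"))  # <li>Gadget hinge</li>
```

A crawler such as LexiBot differs from an ordinary one precisely in that it fills in and submits such forms instead of only following links.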

BrightPlanet founder Michael K. Bergman identified 12 kinds of "hidden" web resources ( / ub / biv / specials.htm) belonging to the class of online databases. The list included traditional databases (patents, medicine, finance) as well as public resources: job listings, chat rooms, libraries, and reference books. Bergman also classed as "hidden" the specialized search engines that serve specific industries or markets, since their databases are not included in the global directories of traditional search services.

The "hidden" web also includes the numerous systems that interact with users (help desks, advice services, training) and require human participation to generate dynamic responses from servers. It likewise includes closed (fully or partially) information available only to users with specific addresses or address ranges, and sometimes only to particular cities or countries.

Many also count as part of the "hidden" web the pages registered on free hosting servers, which are indexed at best only partially: to avoid advertising spam, search engines make no effort to crawl them in full. An entire category of so-called gray documents, hosted in dynamic content management systems, also belongs to the "deep" web. Search engines usually limit the depth to which they index such sites in order to avoid cycling through the same pages. And, of course, web resources whose creators have never announced them to anyone remain "hidden" as well.
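The depth limit mentioned above can be sketched as a breadth-first crawl with a visited set and a depth cap. The link graph below is invented for illustration; real crawlers add many more safeguards, but the principle is the same: cycles are broken by remembering visited pages, and `max_depth` bounds how far from the entry page the crawler indexes.

```python
from collections import deque

# Made-up site graph; note the links back to "/" and "/a" form cycles.
SITE = {
    "/":      ["/a", "/b"],
    "/a":     ["/a/1", "/"],
    "/a/1":   ["/a/1/x"],
    "/a/1/x": ["/a"],
    "/b":     [],
}

def crawl(start, max_depth):
    """Breadth-first crawl: a visited set breaks cycles, and max_depth
    bounds how far from the start page we index (a sketch)."""
    visited = set()
    queue = deque([(start, 0)])
    while queue:
        page, depth = queue.popleft()
        if page in visited or depth > max_depth:
            continue
        visited.add(page)
        for link in SITE.get(page, []):
            queue.append((link, depth + 1))
    return visited

print(sorted(crawl("/", max_depth=1)))  # ['/', '/a', '/b']
print(sorted(crawl("/", max_depth=3)))  # everything reachable
```

With a low depth cap, pages like "/a/1/x" are never indexed even though they are linked, which is exactly how legitimately linked content can still end up in the "deep" web.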

The "deep" web contains many alternatives to commercial databases such as Dialog or Lexis-Nexis. For example, the databases of legal documents of Ukraine and Russia (the "Rada" and "Code" systems, respectively) can be attributed entirely to this category: the hundreds of thousands of documents they offer for free viewing do not appear in the indices of global search engines.

© 2010 - 2019 D@nVitLabs