The terms deep internet, invisible internet, and hidden internet refer to internet content that is not indexed by conventional search engines for a variety of reasons. The term is attributed to the computer scientist Mike Bergman. It is the opposite of the surface internet.
Origin
The main cause of the existence of the deep internet is the inability of search engines (DuckDuckGo, Google, Yahoo, Bing, etc.) to find or index much of the information on the internet. If search engines could access all of this information, the size of the "deep internet" would shrink almost entirely. However, even if search engines could index the information on the deep internet, it would not cease to exist, because there will always be private pages. Search engines cannot access the information on these pages; only certain users, those with passwords or special access codes, can do so.
Size
The deep internet is a set of websites and databases that common search engines cannot find because they are not indexed. The content that can be found on the deep internet is very broad.
The internet is divided into two branches: the deep internet and the surface internet. The surface internet consists of static, fixed pages, while the deep internet is composed of dynamic pages. Static pages do not rely on a database for their content; they reside on a server waiting to be retrieved and are basically HTML files whose content never changes. Any change is made directly in the code, and the new version of the page is uploaded to the server. These pages are less flexible than dynamic pages. Dynamic pages are generated as the result of a database query: the content is stored in a database and is delivered only when requested by the user.
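As a minimal sketch of this distinction, the following Python example (not taken from any source cited here) serves one static HTML page and one dynamic page whose content only exists once a visitor asks for a specific record. It assumes the Flask framework and an SQLite file named catalog.db containing a products(id, name) table; both names are purely illustrative.

```python
# Illustrative sketch: a static page vs. a database-backed dynamic page.
# Assumes Flask is installed and a catalog.db file with a products(id, name)
# table exists; both are placeholder names for the example.
import sqlite3
from flask import Flask, abort

app = Flask(__name__)

# Static page: the HTML never changes unless the file on the server is edited.
STATIC_HTML = "<html><body><h1>About us</h1><p>Fixed content.</p></body></html>"

@app.route("/about")
def about():
    return STATIC_HTML

# Dynamic page: the document is built only when a user requests a specific id,
# so a crawler that never issues that request never sees the resulting page.
@app.route("/product/<int:product_id>")
def product(product_id):
    conn = sqlite3.connect("catalog.db")
    row = conn.execute(
        "SELECT name FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    conn.close()
    if row is None:
        abort(404)
    return f"<html><body><h1>{row[0]}</h1></body></html>"

if __name__ == "__main__":
    app.run(debug=True)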
In 2010 it was estimated that the information on the deep internet amounted to 7,500 terabytes, equivalent to approximately 550 billion individual documents. The content of the deep internet is 400 to 550 times greater than what can be found on the surface internet. By comparison, the surface internet is estimated to contain only 19 terabytes of content and one billion individual documents.
Also in 2010, it was estimated that there were more than 200,000 sites on the deep internet.
Estimates based on extrapolation from a University of California, Berkeley study suggest that the deep internet now contains about 91,000 terabytes.
In 2007 the Association for Computing Machinery (ACM) published a study finding that Google and Yahoo each indexed 32% of deep internet objects, while MSN had the smallest coverage, at 11%. The combined coverage of the three engines, however, was only 37%, indicating that they were indexing largely the same objects.
It is estimated that about 95% of the internet is deep internet, also called invisible or hidden, and the information it hosts is not always available for use. For this reason, specialized tools such as dedicated search engines have been developed to access it.
Reasons
Reasons why search engines cannot index some pages:
- Contextual Web: pages whose content varies depending on the context (for example, the client's IP address, previous visits, etc.).
- Dynamic content: Dynamic pages retrieved in response to parameters, for example, data sent through a form.
- Restricted content: Password-protected pages, Captcha-protected content, etc.
- Non-HTML content: textual content in multimedia files, other extensions such as exe, rar, zip, etc.
- Software: Intentionally hidden content, which requires a specific program or protocol to access (examples: Tor, I2P, Freenet)
- Unlinked pages: pages whose existence search engines have no reference to, for example, pages that have no links from other pages (a crawler that only follows links is sketched below).
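To illustrate that last point, here is a hedged sketch of a breadth-first crawler built only on Python's standard library. It discovers pages exclusively by following hyperlinks, so any page with no inbound link from the seed is simply never visited; the seed URL is a placeholder.

```python
# Illustrative sketch: a crawler that can only reach link-connected pages.
# Unlinked pages, form-only pages, and restricted pages are never discovered.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags found in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=20):
    """Visit only the pages reachable by following links from seed_url."""
    seen, queue, visited = {seed_url}, deque([seed_url]), []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable or restricted content is skipped
        visited.append(url)
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # Only follow web links that have not been queued before.
            if absolute.startswith(("http://", "https://")) and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return visited


if __name__ == "__main__":
    # Placeholder seed address; replace with a site you are allowed to crawl.
    print(crawl("https://example.com/"))
```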
Deep internet resources
Deep internet resources can be classified into the following categories:
- Limited-access content: sites that technically restrict access to their pages (for example, by using the robots exclusion standard or CAPTCHAs), which prevents search engines from browsing them and creating cached copies (a robots.txt check is sketched after this list).
- Dynamic content: dynamic pages returned in response to a submitted query or accessed through a form, especially when open-ended input elements such as text fields are used.
- Unlinked content: pages that are not connected to other pages, which may prevent web crawlers from accessing the content. This material is referred to as pages without inbound links.
- Scripted content: pages that are only accessible through links generated by JavaScript, as well as content downloaded dynamically from web servers through Flash or Ajax solutions.
- Non-HTML content: textual content encoded in multimedia files (image or video) or in specific file formats not handled by search engines.
- Private web: sites that require registration and a password to log in
- Contextual web: pages with different content for different access contexts (e.g., client IP address ranges or the previous browsing sequence).
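As an illustration of the robots exclusion standard mentioned above, the short Python sketch below uses the standard library's urllib.robotparser to decide whether a page may be fetched. The URLs and user-agent name are placeholders, not references to any real crawler.

```python
# Illustrative sketch: a well-behaved crawler consults robots.txt before
# fetching a page. Sites that disallow crawling this way stay unindexed even
# though they are publicly reachable. URLs below are placeholders.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()  # downloads and parses the robots.txt file

page = "https://example.com/private/archive.html"
if robots.can_fetch("MyCrawler", page):
    print("Allowed: the page may be fetched and indexed.")
else:
    print("Disallowed: a compliant search engine will skip this page.")
```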
What is the Deep Web?
The Deep Web is no place for the impulsive, the morbid, or the inexperienced.
A significant percentage of internet users know nothing beyond what Google chooses to show them. Yet there is life beyond it: in fact, 96% of internet content lies beyond the reach of Google and similar search and indexing engines, such as those of Yahoo or Microsoft. In this article we explain everything you need to know about the Deep Web, show just how terrifying it is, and finish by dispelling some myths about it.
These are lines of opinion, and it is necessary to begin by saying so to avoid misunderstandings; even so, the information we offer is objective and based directly on first-hand browsing experience. So let yourself enjoy the terrifying Deep Web with a few small embellishments which, rest assured, will not lead you to misconceptions.
The Hidden Wiki
The Hidden Wiki is an encyclopedia, similar to Wikipedia, that is hosted on the deep internet and functions as an index for accessing .onion domain pages.
Contents of The Hidden Wiki
The site is characterized by its use of wiki code and by the fact that, although it is a wiki-style project, it has its own domain, «.onion», which takes the place of the «.com» domain.
Typically, its articles are written by a small number of people rather than through open collaboration, owing to the site's location and the difficulty of finding it on the network. It was forced by United States authorities to close on March 10, 2014; to avoid the imminent shutdown, The Hidden Wiki moved its server and domain elsewhere.
It works only as an index for accessing other pages of the same «.onion» type.
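As an illustrative, hedged sketch of how such «.onion» pages are reached in practice, the Python snippet below routes a request through a local Tor client's SOCKS proxy (Tor's default port, 9050). It assumes the requests library is installed with SOCKS support (requests[socks]); the .onion address shown is a placeholder, not a real service.

```python
# Illustrative sketch: fetching a «.onion» page through a local Tor client.
# Assumes Tor is running with its default SOCKS proxy on 127.0.0.1:9050 and
# that requests has SOCKS support (pip install requests[socks]).
import requests

TOR_PROXY = {
    # socks5h resolves the hostname inside Tor, which .onion names require.
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# Placeholder address for illustration only; it does not point to a real site.
url = "http://exampleonionaddressplaceholder.onion/index"

response = requests.get(url, proxies=TOR_PROXY, timeout=60)
print(response.status_code)
print(response.text[:200])
```

Without the proxy settings, an ordinary DNS lookup of the .onion name simply fails, which is why this content remains invisible to regular crawlers.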
Anti-child pornography operation 2011
Anonymous declares war on child pornography
The group of digital activists launches #OpPedoChat, its operation against pedophile websites.
The digital activist group Anonymous has already identified its next target. On this occasion, and for the second time, it is attacking websites dedicated to pedophilia and child pornography. Its objective: "to decimate, if not eradicate, this plague that ravages the internet. For the sake of our followers, for the sake of humanity and for pure fun, we will expel from the internet and destroy any portal that continues to operate."
The #OpPedoChat operation has attacked a multitude of web pages hosting pedophile content. According to Anonymous, its goal is not only to destroy what it finds but also to uncover and reveal the personal data of pedophiles in order to expose them publicly. In October 2011 the group attacked more than 40 websites with similar themes and published the data of 1,500 pederasts.
According to Anonymous, at least 85 web pages containing photographs and videos of minors, along with other material glorifying pederasty, have already fallen to its attacks. The operation, the group announces, will run for weeks, and it is calling for collaboration to expand its reach.
Anonymous has released a video statement. The image of a Guy Fawkes mask, one of its symbols, and a robotic voice-over claim the attacks and call for the participation of others. Within the video itself, a text message states that "although Anonymous supports freedom of expression and a free internet, what it does not support is people being able to steal a child's innocence".





