{{subpages}}
{{TOC|right}}
[[Image:Internet map 1024.jpg|right|thumb|250px|A map graphically displaying interconnections (known as routes) on the public Internet. These routes are exchanged among [[router]]s via the dynamic [[routing protocol]] known as the [[Border Gateway Protocol]].]]
{{seealso|Development of the Internet}}
{{seealso|Internet architecture}}
{{seealso|Internet Protocol Suite}}
The '''[[Internet]]''' is a term with many meanings, depending on the context of its use.<ref name="Internet">{{cite book
| last = Comer | first = Douglas E.
| title = Computer Networks and Internets | publisher = Pearson Prentice-Hall
| date = 2009 | location = Upper Saddle River, NJ | isbn = 978-0-13-606127-3
}}</ref> To the general public in 2009, the term is often used synonymously with the [[World Wide Web]], its best-known application,<ref name=Okin>{{citation
| last = Okin | first = J. R.
| title = The Information Revolution: The Not-for-dummies Guide to the History, Technology, And Use of the World Wide Web
| publisher = Ironbound Press | date = 2005 | location = Winter Harbor, ME | isbn = 0-9763857-4-0
}}</ref> although there are many other applications in active public use. These include [[electronic mail]], [[streaming media]] such as internet radio and video, a large percentage of [[telephone traffic]], and [[system monitoring]] and [[System Control And Data Acquisition|real-time control]] applications, to name a few. Prior to the Web, [[electronic mail]], [[usenet]]-based [[newsgroup]]s, [[gopher]] and [[file transfer]] were the major applications.


More precisely, the Internet is a "network of [[computer network|networks]]": the global network on which a wide range of applications and networking experiments run, using the technologies of the [[Internet Protocol Suite]].

In one respect the Internet is similar to an iceberg: the vast majority of it is out of sight. While [[distributed applications]] allow users to make use of [[internet services]], in the context of the [[convergence of communications]] those applications depend on a large suite of technologies visible only to the enterprises that provide them.


To [[Internet Service Provider]]s, the '''Internet''' identifies these underlying services. Some of these internet services are accessible to the general public, while the same technologies provide similar services in restricted environments, such as an enterprise [[intranet]], military and government [[private internet]]s, and local [[home networks]]. Further complicating the notion of an Internet is the frequent interconnection of public and private networks in ways that allow limited interaction.


This article <!--and the subgroup it describes--> uses the term Internet in the broadest sense. That is, it identifies the applications that provide an interface between users and [[communications services]]; those services themselves; public and private instances of application and communications services; and the aggregation of private and public networks into a global communications and application resource.


To Internet operators, however, the public Internet is the set of interconnected, separately administered networks that work through [[address registry|addressing registries]] to ensure unique addresses and that exchange information on reachability using the [[Border Gateway Protocol]].
==The history of the Internet==
{{main|development of the Internet}}
The [[development of the Internet]] shows it to be the culmination of significant activity in both the commercial world and government-sponsored programs. While the main development occurred in the United States, there were major contributions from researchers and engineers in the U.K., France and other parts of Europe. This work led to the existing architectural model.


==The architecture of the Internet==
In order to engineer the Internet, its designers and engineers place its services into one of several layers, which together comprise the [[Internet protocol architecture]].<ref name="Arch">{{cite web |title=RFC1958: Architectural Principles of the Internet |url=http://www.ietf.org/rfc/rfc1958.txt |date=June 1996 |work= |publisher=Internet Engineering Task Force |accessdate=Sept. 17, 2009}}</ref> Internet architectural experts deprecate an overemphasis on layering; the more important principles of Internet architecture include:
*End-to-End Principle: application intelligence is at the edge of the cloud; there have been variations on this principle.
*Robustness Principle: "Be conservative in what you send, be liberal in what you receive."


While IPv4 will be present indefinitely, it is limited in its capability for modern functions, and an evolution is in process to [[Internet Protocol version 6]] (IPv6). Internally, the Internet is divided into [[Autonomous System]]s, which exchange information about the destinations they can reach using the [[Border Gateway Protocol]] (BGP).

While there have been several different protocol architecture designs, the one with the strongest support consists of four layers: 1) the application layer, 2) the transport layer, 3) the internet layer, and 4) the link layer.<ref name="Arch" /><ref name="Routers">{{cite web |title=RFC1812: Requirements for IP Version 4 Routers |url=http://tools.ietf.org/html/rfc1812 |date=Dec. 1, 2006 |work= |publisher=Internet Engineering Task Force |accessdate=Sept. 17, 2009}}</ref> Each protocol layer uses the services of the next lower layer (except the lowest, the link layer) to provide a value-added service to the layer above it (except for the application layer, which provides services to users). Using this protocol architecture, it is possible to describe how the Internet works.
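
The layering can be pictured as successive wrapping: each layer adds its own header to whatever the layer above hands it. The sketch below (in Python, with invented, human-readable header fields rather than real TCP/IP formats) is only an illustration of that wrapping, not of the actual protocols.

<syntaxhighlight lang="python">
# Toy illustration of the four-layer model: each layer wraps the data from
# the layer above with its own header. The header contents are simplified
# stand-ins, not real TCP, IP or Ethernet formats.

def application_layer(message: str) -> bytes:
    return message.encode("utf-8")

def transport_layer(payload: bytes, src_port: int, dst_port: int) -> bytes:
    return f"TCP {src_port}->{dst_port} len={len(payload)}|".encode() + payload

def internet_layer(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    return f"IP {src_ip}->{dst_ip}|".encode() + segment

def link_layer(packet: bytes, src_mac: str, dst_mac: str) -> bytes:
    return f"ETH {src_mac}->{dst_mac}|".encode() + packet

frame = link_layer(
    internet_layer(
        transport_layer(application_layer("GET / HTTP/1.1"), 49152, 80),
        "192.0.2.10", "198.51.100.20"),
    "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02")
print(frame)  # the application data sits innermost, wrapped by each lower layer
</syntaxhighlight>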


[[Web browser]]s are the most common user interface to the Internet. Such browsers translate human requests into the [[Hypertext Transfer Protocol]] (HTTP), which actually moves data between the browser and a [[Web server]]. Consequently, measured solely in terms of percentage of use, the World Wide Web is the most frequently used Internet application, although this is expected to change: forecasts of Internet bandwidth utilization suggest that video traffic will make up over 90% of Internet traffic by 2013.<ref name="TrafficGrowth">{{cite web |title=Cisco Visual Networking Index: Forecast and Methodology, 2008–2013 |url=http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-481360.pdf |date=June 9, 2009 |work= |publisher=Cisco Systems, Inc. |accessdate=Sept. 16, 2009}}</ref> The communications services provided by the Internet have no direct human interfaces; every user-visible function must go through a program resident on a client or server computer. There are literally hundreds of different [[protocol (computer)|protocols]], applications and services that run over the Internet. [[Virtual private network]]s interconnecting the parts of individual enterprises, or sets of cooperating enterprises, overlay the Internet. As mentioned previously, a wide range of interconnected networks using the same protocols as the public Internet, but isolated from it, provide services ranging from passing orders to launch nuclear weapons, authorizing credit card purchases, collecting intelligence information, and controlling the electric power grid (see [[System Control And Data Acquisition]]), to [[telemedicine]] applications such as transferring medical images and even allowing remote surgery. Many of these applications use custom [[application programming interface]]s (APIs) that do not involve a web browser, although web programming also uses APIs. Consequently, internet distributed applications comprise a much larger set than those visible to the general public.
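
As a rough illustration of the browser-to-server exchange described above, the following sketch issues a single HTTP GET request using only the Python standard library. The host name is a placeholder; any reachable web server could be substituted.

<syntaxhighlight lang="python">
# A browser's page request reduces to an HTTP exchange like this one.
# "example.org" is a placeholder host, not a recommendation.
import http.client

conn = http.client.HTTPConnection("example.org", 80, timeout=10)
conn.request("GET", "/", headers={"Host": "example.org",
                                  "User-Agent": "minimal-example/0.1"})
response = conn.getresponse()
print(response.status, response.reason)  # e.g. "200 OK"
print(response.read()[:200])             # first bytes of the page a browser would render
conn.close()
</syntaxhighlight>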


In addition to applications that are directly experienced by Internet customers, there is a wide range of internet applications that exist to provide [[infrastructure services]] to the internet. One example of an infrastructure service is the [[Domain Name Service]] (DNS), which associates computers connected to the Internet with human-friendly names. The movement of data through the internet requires that it visit intermediate systems called [[router]]s. The activity of directing the data through the internet, called [[routing]], relies on an infrastructure application that distributes routing data to routers. The [[secure identification]] of users to applications requires the use of [[authentication servers]], such as [[RADIUS]] and [[Kerberos]], each of which is a distributed application in its own right. These are just a few of the internet infrastructure applications that support the provision of internet service.
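
A name lookup of the kind DNS performs can be demonstrated in a few lines of standard-library Python; the host name below is illustrative only.

<syntaxhighlight lang="python">
# DNS in action: map a human-friendly name to the numeric addresses that
# applications actually connect to.
import socket

for family, _type, _proto, _canonname, sockaddr in socket.getaddrinfo(
        "www.example.org", 443, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
</syntaxhighlight>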


Internet applications are distributed.<ref name="Dist">{{cite web |title=Distributed Computing: An Introduction |url=http://www.extremetech.com/article2/0%2C1697%2C11769%2C00.asp |work= |publisher=ExtremeTech |accessdate=16 Sept., 2009}}</ref> That is, they normally are composed of components that reside at different locations. That means they must exchange data through communications equipment that is subject to various failure modes. Furthermore, one element may have the capability to send data faster than the receiver can process it. The next layer in the protocol architecture, the [[transport layer]], provides services that address these issues. Transport layer protocols, such as the [[Transmission Control Protocol]] (TCP), provide [[end-to-end error management]] and [[flow-control]] services that ensure application elements can exchange data in a [[fault-tolerant]] and synchronized manner.
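
The following self-contained sketch shows the transport-layer service from the application's point of view: a tiny TCP echo server and client exchange data over a reliable, ordered byte stream. It is an illustration only; real applications would add their own framing and error handling.

<syntaxhighlight lang="python">
# Minimal TCP echo server and client in one process (requires Python 3.8+).
import socket
import threading

def echo_server(server_sock: socket.socket) -> None:
    conn, _addr = server_sock.accept()
    with conn:
        while data := conn.recv(1024):  # TCP delivers the bytes in order, or not at all
            conn.sendall(data)

server = socket.create_server(("127.0.0.1", 0))  # port 0: let the OS pick a free port
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello over a reliable stream")
    print(client.recv(1024))
server.close()
</syntaxhighlight>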


Rather than relying on the error and flow-control services provided by TCP, some applications handle these services themselves. Those that do use a [[datagram]] service, also provided by the transport layer. For example, the [[User Datagram Protocol]] (UDP) moves packets between application parts without providing either error-control or flow-control services. Certain applications, such as [[Voice over Internet Protocol]], can tolerate some errors but are extremely intolerant of delay, so error correction is not used.
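
By contrast, a datagram exchange with UDP involves no connection setup and no delivery guarantee, as the minimal sketch below (using loopback addresses so it is self-contained) illustrates.

<syntaxhighlight lang="python">
# UDP simply fires individual datagrams: no connection, no retransmission,
# no flow control. On a real network nothing guarantees delivery or ordering.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"one self-contained datagram", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data, "from", addr)
sender.close()
receiver.close()
</syntaxhighlight>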


The next layer of internet service, the [[Internet layer]], moves data between [[end-systems]] (normally customer computers, but in some cases infrastructure systems) through an interconnected set of systems, the routers mentioned above. Routers come in all shapes and sizes. Some, normally located at the periphery of the internet, such as those in a home or small business, are known as [[edge routers]]. Others are service provider equipment with varying capabilities, from modest-performance [[border routers]] to high-performance [[core routers]]. These routers are interconnected, moving data across the Internet in a way that increases the probability of successful transit. There are two types of routing schemes. [[Virtual circuit routing]] reserves resources over a fixed path between two end-systems. [[Packet routing]] operates in a way whereby individual [[packet]]s of data may take different paths through the systems that interconnect end-systems. The internet layer also supports specialized data services, such as [[multicast]], [[broadcast]], and [[anycast]] routing.
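
The packet-routing decision itself can be sketched as a longest-prefix-match lookup in a forwarding table. The example below uses an invented three-entry table purely for illustration; real routers build their tables from routing protocols such as the Border Gateway Protocol.

<syntaxhighlight lang="python">
# A router's core job at the internet layer: find the most specific
# (longest-prefix) forwarding-table entry that matches each destination.
import ipaddress

forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "interface to private site",
    ipaddress.ip_network("10.1.0.0/16"): "interface to branch office",
    ipaddress.ip_network("0.0.0.0/0"): "default route to provider",
}

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return forwarding_table[best]

print(next_hop("10.1.2.3"))     # -> interface to branch office
print(next_hop("203.0.113.9"))  # -> default route to provider
</syntaxhighlight>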


Routers and end systems connect to each other through the [[Link layer]]. This layer may comprise a [[physical channel]] or a complex [[networking infrastructure]]. Both are commonly deployed options.


Physical channels encode data using various techniques, thereby providing the basic data transmission service between directly connected equipment. There are a wide variety of physical channels, each utilizing its own data encoding scheme. Examples of physical channels used in the Internet include Ethernet-family protocols over copper or optical cable; [[wireless local area network]]s, such as those used in [[Wi-Fi]] (also known as [[802.11]]) and in [[cellular telephony]]; and wireless point-to-point [[radio]]. Since physical channels may introduce [[communications errors]] and generally do not provide [[flow control]], the link layer may provide services that correct most errors and also implement flow control. The characteristics of the physical channel may vary widely, from the fairly reliable [[ethernet]], through less reliable [[wireless]] channels, to the very unreliable [[deep space radio]] channels. Consequently, each type of physical channel may require a different link-layer protocol to accommodate its characteristics. For example, Ethernet channels provide only [[error detection]] and no flow control services. Low to moderate data rate serial channels, on the other hand, may provide acknowledgment-based error and flow control.
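
Link-layer error detection of the kind Ethernet performs can be sketched as a checksum appended to each frame. The example below uses CRC-32 as a stand-in for Ethernet's frame check sequence; it is illustrative rather than a real frame format.

<syntaxhighlight lang="python">
# Append a CRC to each payload and verify it on receipt; a damaged frame
# fails the check and would be discarded by the link layer.
import zlib

def frame(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check(received: bytes) -> bool:
    payload, crc = received[:-4], int.from_bytes(received[-4:], "big")
    return zlib.crc32(payload) == crc

sent = frame(b"link-layer payload")
print(check(sent))                            # True: frame arrived intact
corrupted = bytes([sent[0] ^ 0x01]) + sent[1:]
print(check(corrupted))                       # False: bit error detected
</syntaxhighlight>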


When the link layer comprises networking infrastructure, it implements a technique known as [[protocol encapsulation]]. This scheme encapsulates the packets of the internet layer inside packets of the link layer network. Common examples are carrying internet traffic over [[Multi-Protocol Label Switching]] (MPLS) or [[Asynchronous Transfer Mode]] (ATM) virtual circuit services, or over "[[Ethernet]]" ([[IEEE 802.3]], [[IEEE 802.11]], [[IEEE 802.16]], etc.). Sometimes it is useful to encapsulate internet packets inside other internet packets. For example, a private intranet may wish to interconnect several isolated sites using the services of the public internet. It protects its internet packets with a suitable security protocol, such as [[IPSec]], and places them inside the internet packets of the public network, which moves them between these isolated sites.
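
The encapsulation idea can be sketched very simply: an inner packet addressed between two private sites rides as the payload of an outer packet addressed between the sites' public gateways. The fragment below uses invented text "headers" and documentation-range addresses purely to show the nesting; a real tunnel would use binary IP headers and, typically, IPsec protection of the inner packet.

<syntaxhighlight lang="python">
# Tunnelling in outline: wrap an inner packet in an outer packet at one
# gateway, unwrap it at the other. The "headers" here are illustrative text.

def encapsulate(inner_packet: bytes, outer_src: str, outer_dst: str) -> bytes:
    return f"IP {outer_src}->{outer_dst}|".encode() + inner_packet

def decapsulate(outer_packet: bytes) -> bytes:
    _outer_header, _sep, inner_packet = outer_packet.partition(b"|")
    return inner_packet

inner = b"IP 10.0.1.5->10.0.2.9|private application data"
tunnelled = encapsulate(inner, "198.51.100.1", "203.0.113.1")
print(tunnelled)               # outer header, then the whole inner packet
print(decapsulate(tunnelled))  # the original inner packet, recovered intact
</syntaxhighlight>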


The Internet relies not only on technology acting within a single layer of its protocol architecture, but also on mechanisms that are spread over several protocol layers. As mentioned previously, routing is one such technology, using application services to move routing data to routers in order to provide the network-layer routing service. Another example is the provision of [[network security]] within the Internet. For example, providing [[Transport Layer Security]] requires the encryption of packets at end-systems; this requires [[encryption keys]] that are distributed by a logically separate application. [[Network management]] may utilize an application layer protocol, such as the [[Simple Network Management Protocol]] (SNMP), in concert with a network-layer protocol, such as the [[Internet Control Message Protocol]] (ICMP).
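
A concrete example of such cross-layer cooperation is a TLS connection: the application opens an ordinary TCP connection, and a security layer then negotiates keys and certificates on top of it. The sketch below shows this with the Python standard-library ssl module; the host name is a placeholder for any TLS-enabled web server.

<syntaxhighlight lang="python">
# An ordinary TCP connection, wrapped by a TLS layer that handles key
# exchange and certificate checking on behalf of the application.
import socket
import ssl

context = ssl.create_default_context()  # loads the system's trusted certificate authorities
with socket.create_connection(("www.example.org", 443), timeout=10) as tcp_sock:
    with context.wrap_socket(tcp_sock, server_hostname="www.example.org") as tls_sock:
        print(tls_sock.version())                  # negotiated TLS version
        print(tls_sock.getpeercert()["subject"])   # identity asserted by the server's certificate
</syntaxhighlight>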


==Professional societies and organizations==
:(See External Links subpage for website homepages)


*International: [[Internet Society]] (ISOC), IEEE Communications Society (IEEE ComSoc), [[World Wide Web Consortium]] (W3C), Internet Technical Committee (ITC), [[Association for Computing Machinery]] [[Special Interest Group on Data Communications]] (ACM SIGCOMM), [[Internet Corporation for Assigned Names and Numbers]] (ICANN); [[International Telecommunication Union]] (ITU), International Electrotechnical Commission (IEC).
*[[North America]]: [[North American Network Operators Group]] (NANOG), [[American Registry for Internet Numbers]] (ARIN)
*[[Europe]]: European Telecommunications Standards Institute (ETSI), [[Réseaux IP Européens]] (RIPE), [[RIPE Network Coordination Centre]] (RIPE-NCC)
*[[Asia]]: [[Asia-Pacific Network Information Center]] (APNIC), [[South Asian Network Operators Group]] (SANOG)
*[[Middle East]]: Middle East Network Operators Group (MENOG)
*[[Africa]]: [[African Network Operators Group]] (AfrNOG)
*[[Pacific]]: The Pacific Network Operators Group (PacNOG)
*[[Latin America]]: [[Latin America and Caribbean Network Information Center]] (LACNIC); Latin America and Caribbean Region Network Operators Group (LACNOG)
*[[France]]: FRench Network Operators Group (FRnOG)
*[[United States of America]]: Telecommunications Industry Association (TIA)


==References==
{{reflist|2}}
