Download and share! The diffusion of important information and knowledge is essential for the world's progress. Thanks!
Master's Dissertation – Tables, Figures and Graphics – "My" Dissertation @ #Innovation #energy #life #health #Countries #Time #Researches #Reference #Graphics #Ages #Age #Mice #People #Person #Mouse #Genetics #PersonalizedMedicine #Diagnosis #Prognosis #Treatment #Disease #UnknownDiseases #Future #VeryEfficientDrugs #VeryEfficientVaccines #VeryEfficientTherapeuticalSubstances #Tests #Laboratories #Investments #Details #HumanLongevity #DNA #Cell #Memory #Physiology #Nanomedicine #Nanotechnology #Biochemistry #NewMedicalDevices #GeneticEngineering #Internet #History #Science #World
The influence of physical activity in the progression of experimental lung cancer in mice
- PMID: 22683274
- DOI: 10.1016/j.prp.2012.04.006
GRUPO_AF1 – GROUP AF1 – Aerobic Physical Activity – "My" Dissertation – Faculty of Medicine of Sao Jose do Rio Preto
GRUPO_AFAN1 – GROUP AFAN1 – Anaerobic Physical Activity – "My" Dissertation – Faculty of Medicine of Sao Jose do Rio Preto
GRUPO_AF2 – GROUP AF2 – Aerobic Physical Activity – "My" Dissertation – Faculty of Medicine of Sao Jose do Rio Preto
GRUPO_AFAN2 – GROUP AFAN2 – Anaerobic Physical Activity – "My" Dissertation – Faculty of Medicine of Sao Jose do Rio Preto
Slides – Master's – "My" Dissertation – Faculty of Medicine of Sao Jose do Rio Preto
DMBA CARCINOGEN IN EXPERIMENTAL MODELS
Evaluation of the influence of aerobic and anaerobic physical activity on the progression of experimental lung cancer – Summary – "My" Dissertation – Faculty of Medicine of Sao Jose do Rio Preto
Lung cancer is one of the most incident neoplasms in the world, representing the leading cause of cancer mortality. Many epidemiologic studies have suggested that physical activity may reduce the risk of lung cancer, and other works have evaluated the effectiveness of physical activity in the suppression, remission and reduction of the recurrence of tumors. The aim of this study was to evaluate the effects of aerobic and anaerobic physical activity on the development and progression of lung cancer. Lung tumors were induced with a dose of 3 mg of urethane/kg in 67 male Balb-C mice, divided into three groups: group 1, 24 mice treated with urethane and without physical activity; group 2, 25 mice treated with urethane and subjected to aerobic swimming free exercise; group 3, 18 mice treated with urethane and subjected to anaerobic swimming exercise with gradual loading of 5–20% of body weight. All the animals were sacrificed after 20 weeks, and lung lesions were analyzed. The median number of lesions (nodules and hyperplasia) was 3.0 for group 1, 2.0 for group 2 and 1.5 for group 3 (p=0.052). When comparing only the presence or absence of lesions, there was a decrease in the number of lesions in group 3 as compared with group 1 (p=0.03), but not in relation to group 2. There were no metastases or other changes in other organs. Anaerobic physical activity, but not aerobic, diminished the incidence of experimental lung tumors.
Copyright © 2012 Elsevier GmbH. All rights reserved.
My YouTube Channel: https://www.youtube.com/channel/UC9gsWVbGYWO04iYO2TMrP8Q
Gratitude: invitations for me to participate in very important scientific events around the world within a short time
Link of my dissertation: https://science1984.wordpress.com/2018/07/15/i-did-very-important-detailed-and-innovative-graphics-about-variations-of-all-mice-weigths-during-all-exerimental-time-my-dissertation-they-can-be-an-excelent-reference-for-future-researches-like-2/
My Curriculum Lattes: http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4240145A2
Video – Gratitude: I am very grateful because I was invited over the Internet, through direct messages, to participate in 55 very important science events around the world, in 25 cities, in less than one year. I participated in very important research in Brazil. Information about it is on my blog.
Microsoft + The Jackson Laboratory: Using AI to fight cancer
1,857 views • Oct 27, 2019
Microsoft • 625K subscribers
There are an astounding 4,000 scientific articles published every day that can help in the fight against cancer. Yet with so many, it’s nearly impossible for experts to have the most up-to-date and relevant information needed to study and treat it. In collaboration with Microsoft, The Jackson Laboratory is using AI to empower researchers and medical professionals with a powerful digital encyclopedia that has the potential to change the way we study and treat cancer. Learn more: http://msft.social/5fTcsC Audio Description Version: https://youtu.be/ptvny6C3QQU
Cancer researchers embrace AI to accelerate development of precision medicine
October 27, 2019 | John Roach
Biomedical researchers are embracing artificial intelligence to accelerate the implementation of cancer treatments that target patients’ specific genomic profiles, a type of precision medicine that in some cases is more effective than traditional chemotherapy and has fewer side effects.
The potential for this new era of cancer treatment stems from advances in genome sequencing technology that enable researchers to more efficiently discover the specific genomic mutations that drive cancer, and from an explosion of research on the development of new drugs that target those mutations.
To harness this potential, researchers at The Jackson Laboratory, an independent, nonprofit biomedical research institution also known as JAX and headquartered in Bar Harbor, Maine, developed a tool to help the global medical and scientific communities stay on top of the continuously growing volume of data generated by advances in genomic research.
The tool, called the Clinical Knowledgebase, or CKB, is a searchable database where subject matter experts store, sort and interpret complex genomic data to improve patient outcomes and share information about clinical trials and treatment options.
The challenge is to find the most relevant cancer-related information from the 4,000 or so biomedical research papers published each day, according to Susan Mockus, the associate director of clinical genomic market development with JAX’s genomic medicine institute in Farmington, Connecticut.
“Because there is so much data and so many complexities, without embracing and incorporating artificial intelligence and machine learning to help in the interpretation of the data, progress will be slow,” she said.
That’s why Mockus and her colleagues at JAX are collaborating with computer scientists working on Microsoft’s Project Hanover who are developing AI technology that enables machines to read complex medical and research documents and highlight the important information they contain.
While this machine reading technology is in the early stages of development, researchers have found they can make progress by narrowing the focus to specific areas such as clinical oncology, explained Peter Lee, corporate vice president of Microsoft Healthcare in Redmond, Washington.
“For something that really matters like cancer treatment where there are thousands of new research papers being published every day, we actually have a shot at having the machine read them all and help a board of cancer specialists answer questions about the latest research,” he said.
Mockus and her colleagues are using Microsoft’s machine reading technology to curate CKB, which stores structured information about genomic mutations that drive cancer, drugs that target cancer genes and the response of patients to those drugs.
One application of this knowledgebase allows oncologists to discover what, if any, matches exist between a patient’s known cancer-related genomic mutations and drugs that target them as they explore and weigh options for treatment, including enrollment in clinical trials for drugs in development.
This information is also useful to translational and clinical researchers, Mockus noted.
The bottleneck is filtering through the more than 4,000 papers published every day in biomedical journals to find the subset of about 200 related to cancer, reading them, and updating CKB with the relevant information on mutation, drug and patient response.
“What you want is some degree of intelligence incorporated into the system that can go out and not just be efficient, but also be effective and relevant in terms of how it can filter information. That is what Hanover has done,” said Auro Nair, executive vice president of JAX.
The core of Microsoft’s Project Hanover is the capability to comb through the thousands of documents published each day in the biomedical literature and flag and rank all that are potentially relevant to cancer researchers, highlighting, for example, information on gene, mutation, drug and patient response.
Human curators working on CKB are then free to focus on the flagged research papers, validating the accuracy of the highlighted information.
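The flag-and-rank triage step can be pictured with a deliberately simple sketch. Project Hanover's actual system uses machine reading, not keyword counts; the term list, papers, and function below are illustrative stand-ins only.

```python
# Toy illustration of triage: score each abstract by how many
# cancer-related terms it mentions, then rank papers for curators.
# (Illustrative only -- the real system uses machine reading.)

CANCER_TERMS = {"mutation", "tumor", "oncology", "carcinoma", "egfr", "braf"}

def rank_papers(abstracts):
    """Return (score, abstract) pairs, most relevant first."""
    scored = []
    for text in abstracts:
        words = text.lower().replace(".", " ").split()
        score = sum(w in CANCER_TERMS for w in words)
        scored.append((score, text))
    return sorted(scored, reverse=True)

papers = [
    "EGFR mutation status predicts tumor response.",
    "A survey of cloud storage pricing.",
]
ranked = rank_papers(papers)
```

A real ranker would score semantic relations rather than surface words, but the shape of the pipeline — score, sort, hand the top of the list to a human — is the same.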
“Our goal is to make the human curators superpowered,” said Hoifung Poon, director of precision health natural language processing with Microsoft’s research organization in Redmond and the lead researcher on Project Hanover.
“With the machine reader, we are able to suggest that this might be a case where a paper is talking about a drug-gene mutation relation that you care about,” Poon explained. “The curator can look at this in context and, in a couple of minutes, say, ‘This is exactly what I want,’ or ‘This is incorrect.’”
To be successful, Poon and his team need to train machine learning models in such a way that they catch all the potentially relevant information – ensure there are no gaps in content – and, at the same time, weed out irrelevant information sufficiently to make the curation process more efficient.
In traditional machine reading tasks such as finding information about celebrities in news stories, researchers tend to focus on relationships contained within a single sentence, such as a celebrity name and a new movie.
Since this type of information is widespread across news stories, researchers can skip instances that are more challenging such as when the name of the celebrity and movie are mentioned in separate paragraphs, or when the relationship involves more than two pieces of information.
“In biomedicine, you can’t do that because your latest finding may only appear in this single paper and if you skip it, it could be life or death for this patient,” explained Poon. “In this case, you have to tackle some of the hard linguistic challenges head on.”
Poon and his team are taking what they call a self-supervision approach to machine learning in which the model automatically annotates training examples from unlabeled text by leveraging prior knowledge in existing databases and ontologies.
For example, a National Cancer Institute initiative manually compiled information from the biomedical literature on how genes regulate each other but was unable to sustain the effort beyond two years. Poon’s team used the compiled knowledge to automatically label documents and train a machine reader to find new instances of gene regulation.
They took the same approach with public datasets on approved cancer drugs and drugs in clinical trials, among other sources.
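The self-supervision idea can be sketched in a few lines: pairs already known to interact in a curated database are used to auto-label raw sentences as weakly positive training examples, with no manual annotation. The knowledge base, sentences, and function below are hypothetical stand-ins, not CKB or Project Hanover code.

```python
# Minimal sketch of self-supervision (distant supervision): sentences
# that mention a (gene, drug) pair already known to interact in a
# curated knowledge base are auto-labeled as positive training examples.
# The pairs and sentences here are illustrative, not from CKB.

KNOWN_INTERACTIONS = {("EGFR", "gefitinib"), ("BRAF", "vemurafenib")}

def auto_label(sentences):
    """Return (sentence, gene, drug, label) tuples labeled via the KB."""
    examples = []
    for s in sentences:
        words = {w.strip(".,").lower() for w in s.split()}
        for gene, drug in KNOWN_INTERACTIONS:
            if gene.lower() in words and drug.lower() in words:
                examples.append((s, gene, drug, 1))  # weakly positive
    return examples

sentences = [
    "Patients with EGFR mutations responded to gefitinib.",
    "Vemurafenib targets BRAF V600E melanoma.",
    "The weather in Bar Harbor was pleasant.",
]
labeled = auto_label(sentences)
```

The auto-labeled examples are noisy — co-mention does not guarantee the sentence asserts the relation — which is why the trained reader's suggestions still go to a human curator for validation.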
This connect-the-dots approach creates a machine learned model that “rarely misses anything” and is precise enough “where we can potentially improve the curation efficiency by a lot,” said Poon.
Collaboration with JAX
The collaboration with JAX allows Poon and his team to validate the effectiveness of Microsoft’s machine reading technology while increasing the efficiency of Mockus and her team as they curate CKB.
“Leveraging the machine reader, we can say here is what we are interested in and it will help to triage and actually rank papers for us that have high clinical significance,” Mockus said. “And then a human goes in to really tease apart the data.”
Over time, feedback from the curators will be used to help train the machine reading technology, making the models more precise and, in turn, making the curators more efficient and allowing the scope of CKB to expand.
“We feel really, really good about this relationship,” said Nair. “Particularly from the standpoint of the impact it can have in providing a very powerful tool to clinicians.”
- Learn more about the Clinical Knowledgebase and The Jackson Laboratory
- Learn more about Project Hanover
- Read: How Microsoft computer scientists and researchers are working to ‘solve’ cancer
- Read: Microsoft announces general availability of cloud-based tools for genomics research
John Roach writes about Microsoft research and innovation. Follow him on Twitter.
History of the Internet
[Figure: An Opte Project visualization of routing paths through a portion of the Internet]
The history of the Internet has its origin in the efforts of wide area networking that originated in several computer science laboratories in the United States, United Kingdom, and France. The U.S. Department of Defense awarded contracts as early as the 1960s, including for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. The first message was sent over the ARPANET in 1969 from computer science Professor Leonard Kleinrock's laboratory at University of California, Los Angeles (UCLA) to the second network node at Stanford Research Institute (SRI).
Packet switching networks such as the NPL network, ARPANET, Merit Network, CYCLADES, and Telenet were developed in the late 1960s and early 1970s using a variety of communications protocols. Donald Davies first demonstrated packet switching in 1967 at the National Physical Laboratory (NPL) in the UK, which became a testbed for UK research for almost two decades. The ARPANET project led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks. The design included concepts from the French CYCLADES project directed by Louis Pouzin.
In the early 1980s the NSF funded the establishment of national supercomputing centers at several universities, and provided interconnectivity in 1986 with the NSFNET project, which also created network access to the supercomputer sites in the United States for research and education organizations. Commercial Internet service providers (ISPs) began to emerge in the very late 1980s. Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990. The ARPANET was decommissioned in 1990, and the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic.
In the 1980s, research at CERN in Switzerland by British computer scientist Tim Berners-Lee resulted in the World Wide Web, linking hypertext documents into an information system, accessible from any node on the network. Since the mid-1990s, the Internet has had a revolutionary impact on culture, commerce, and technology, including the rise of near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking, and online shopping sites. The research and education community continues to develop and use advanced networks such as JANET in the United Kingdom and Internet2 in the United States. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet’s takeover of the global communication landscape was almost instant in historical terms: it only communicated 1% of the information flowing through two-way telecommunications networks in the year 1993, already 51% by 2000, and more than 97% of the telecommunicated information by 2007. Today the Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking. However, the future of the global internet may be shaped by regional differences in the world.
Internet history timeline

Early research and development:
- 1963: ARPANET concepts developed
- 1965: NPL network planning
- 1966: Merit Network founded
- 1967: NPL network packet switching pilot experiment
- 1969: ARPANET carries its first packets
- 1970: Network Information Center (NIC)
- 1971: Tymnet switched-circuit network
- 1972: Merit Network's packet-switched network operational
- 1972: Internet Assigned Numbers Authority (IANA) established
- 1973: CYCLADES network demonstrated
- 1974: Transmission Control Program specification published
- 1974: Telenet commercial packet-switched network
- 1976: X.25 protocol approved
- 1978: Minitel introduced
- 1979: Internet Activities Board (IAB)
- 1980: USENET news using UUCP
- 1980: Ethernet standard introduced
- 1981: BITNET established

Merging the networks and creating the Internet:
- 1981: Computer Science Network (CSNET)
- 1982: TCP/IP protocol suite formalized
- 1982: Simple Mail Transfer Protocol (SMTP)
- 1983: Domain Name System (DNS)
- 1983: MILNET split off from ARPANET
- 1985: First .COM domain name registered
- 1986: NSFNET with 56 kbit/s links
- 1986: Internet Engineering Task Force (IETF)
- 1987: UUNET founded
- 1988: NSFNET upgraded to 1.5 Mbit/s (T1)
- 1988: OSI Reference Model released
- 1988: Morris worm
- 1989: Border Gateway Protocol (BGP)
- 1989: PSINet founded, allows commercial traffic
- 1989: Federal Internet Exchanges (FIXes)
- 1990: GOSIP (without TCP/IP)
- 1990: ARPANET decommissioned
- 1990: Advanced Network and Services (ANS)
- 1990: UUNET/Alternet allows commercial traffic
- 1990: Archie search engine
- 1991: Wide area information server (WAIS)
- 1991: Gopher
- 1991: Commercial Internet eXchange (CIX)
- 1991: ANS CO+RE allows commercial traffic
- 1991: World Wide Web (WWW)
- 1992: NSFNET upgraded to 45 Mbit/s (T3)
- 1992: Internet Society (ISOC) established
- 1993: Classless Inter-Domain Routing (CIDR)
- 1993: InterNIC established
- 1993: AOL added USENET access
- 1993: Mosaic web browser released
- 1994: Full text web search engines
- 1994: North American Network Operators' Group (NANOG) established

Commercialization, privatization, broader access leads to the modern Internet:
- 1995: New Internet architecture with commercial ISPs connected at NAPs
- 1995: NSFNET decommissioned
- 1995: GOSIP updated to allow TCP/IP
- 1995: very high-speed Backbone Network Service (vBNS)
- 1995: IPv6 proposed
- 1996: AOL changes pricing model from hourly to monthly
- 1998: Internet Corporation for Assigned Names and Numbers (ICANN)
- 1999: IEEE 802.11b wireless networking
- 1999: Internet2/Abilene Network
- 1999: vBNS+ allows broader access
- 2000: Dot-com bubble bursts
- 2001: New top-level domain names activated
- 2001: Code Red I, Code Red II, and Nimda worms
- 2003: UN World Summit on the Information Society (WSIS) phase I
- 2003: National LambdaRail founded
- 2004: UN Working Group on Internet Governance (WGIG)
- 2005: UN WSIS phase II
- 2006: First meeting of the Internet Governance Forum
- 2010: First internationalized country code top-level domains registered
- 2012: ICANN begins accepting applications for new generic top-level domain names
- 2013: Montevideo Statement on the Future of Internet Cooperation
- 2014: NetMundial international Internet governance proposal
- 2016: ICANN contract with U.S. Dept. of Commerce ends, IANA oversight passes to the global Internet community on October 1st

Examples of Internet services:
- 1989: AOL dial-up service provider, email, instant messaging, and web browser
- 1990: IMDb Internet movie database
- 1994: Yahoo! web directory
- 1995: Amazon.com online retailer
- 1995: eBay online auction and shopping
- 1995: Craigslist classified advertisements
- 1996: Hotmail free web-based e-mail
- 1996: RankDex search engine
- 1997: Google Search
- 1997: Babel Fish automatic translation
- 1998: Yahoo! Clubs (now Yahoo! Groups)
- 1998: PayPal Internet payment system
- 1998: Rotten Tomatoes review aggregator
- 1999: 2ch Anonymous textboard
- 1999: i-mode mobile internet service
- 1999: Napster peer-to-peer file sharing
- 2000: Baidu search engine
- 2001: 2chan Anonymous imageboard
- 2001: BitTorrent peer-to-peer file sharing
- 2001: Wikipedia, the free encyclopedia
- 2003: LinkedIn business networking
- 2003: Myspace social networking site
- 2003: Skype Internet voice calls
- 2003: iTunes Store
- 2003: 4chan Anonymous imageboard
- 2003: The Pirate Bay, torrent file host
- 2004: Facebook social networking site
- 2004: Podcast media file series
- 2004: Flickr image hosting
- 2005: YouTube video sharing
- 2005: Reddit link voting
- 2005: Google Earth virtual globe
- 2006: Twitter microblogging
- 2007: WikiLeaks anonymous news and information leaks
- 2007: Google Street View
- 2007: Kindle, e-reader and virtual bookshop
- 2008: Amazon Elastic Compute Cloud (EC2)
- 2008: Dropbox cloud-based file hosting
- 2008: Encyclopedia of Life, a collaborative encyclopedia intended to document all living species
- 2008: Spotify, a DRM-based music streaming service
- 2009: Bing search engine
- 2009: Google Docs, Web-based word processor, spreadsheet, presentation, form, and data storage service
- 2009: Kickstarter, a threshold pledge system
- 2009: Bitcoin, a digital currency
- 2010: Instagram, photo sharing and social networking
- 2011: Google+, social networking
- 2011: Snapchat, photo sharing
- 2012: Coursera, massive open online courses
- 2 Development of wide area networking
- 3 Networks that led to the Internet
- 4 Merging the networks and creating the Internet (1973–95)
- 4.2 From ARPANET to NSFNET
- 4.3 Transition towards the Internet
- 4.4 TCP/IP goes global (1980s)
- 4.5 Rise of the global Internet (late 1980s/early 1990s onward)
- 4.6 Networking in outer space
- 5 Internet governance
- 6 Politicization of the Internet
- 7 Use and culture
- 8 Web technologies
- 10 See also
- 12 Further reading
- 13 External links
The concept of data communication – transmitting data between two different places through an electromagnetic medium such as radio or an electric wire – pre-dates the introduction of the first computers. Such communication systems were typically limited to point-to-point communication between two end devices. Semaphore lines, telegraph systems and telex machines can be considered early precursors of this kind of communication. The telegraph, in the late 19th century, was the first fully digital communication system.
Early computers had a central processing unit and remote terminals. As the technology evolved, new systems were devised to allow communication over longer distances (for terminals) or with higher speed (for interconnection of local devices) that were necessary for the mainframe computer model. These technologies made it possible to exchange data (such as files) between remote computers. However, the point-to-point communication model was limited, as it did not allow for direct communication between any two arbitrary systems; a physical link was necessary. The technology was also considered unsafe for strategic and military use because there were no alternative paths for the communication in case of an enemy attack.
Fundamental theoretical work in data transmission and information theory was developed by Claude Shannon, Harry Nyquist, and Ralph Hartley in the early 20th century. Information theory, as enunciated by Shannon in 1948, provided a firm theoretical underpinning to understand the trade-offs between signal-to-noise ratio, bandwidth, and error-free transmission in the presence of noise, in telecommunications technology.
The development of transistor technology was fundamental to a new generation of electronic devices that later affected almost every aspect of the human experience. The long-sought realization of the field-effect transistor, in the form of the MOS transistor (MOSFET), by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959, brought new opportunities for miniaturization and mass production for a wide range of uses. It became the basic building block of the information revolution and the information age, and laid the foundation for the power electronics that later enabled the development of wireless Internet technology. Network bandwidth has been doubling every 18 months since the 1970s, a trend captured by Edholm's law, similar to the scaling expressed by Moore's law for semiconductors.
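A doubling every 18 months compounds quickly; the short check below (an illustrative calculation, not a measured figure) shows what that rate implies.

```python
# Doubling every 18 months means growth by a factor of 2**(months / 18).
# A quick illustrative check of what Edholm's-law-style doubling implies.

def growth_factor(months, doubling_period=18):
    """Multiplicative bandwidth growth after `months` at the given doubling period."""
    return 2 ** (months / doubling_period)

decade = growth_factor(120)  # 120 months = 10 years -> roughly a 100x increase
```

At that rate, a decade of doubling turns a 1 Mbit/s link into roughly a 100 Mbit/s one.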
Development of wide area networking
With limited exceptions, the earliest computers were connected directly to terminals used by individual users, typically in the same building or site.
Wide area networks (WANs) emerged during the 1950s and became established during the 1960s.
J. C. R. Licklider envisioned "a network of such centers, connected to one another by wide-band communication lines" that could provide "the functions of present-day libraries together with anticipated advances in information storage and retrieval and symbiotic functions suggested earlier in this paper".
In August 1962, Licklider and Welden Clark published the paper “On-Line Man-Computer Communication” which was one of the first descriptions of a networked future.
In October 1962, Licklider was hired by Jack Ruina as director of the newly established Information Processing Techniques Office (IPTO) within DARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network". As part of the information processing office's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at University of California, Berkeley, and one for the Compatible Time-Sharing System project at Massachusetts Institute of Technology (MIT). The apparent waste of resources this caused made obvious the need for the inter-networking that Licklider had identified.
For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them….
I said, oh man, it’s obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet.
Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus for one of his successors, Robert Taylor, to initiate the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years.
Development of packet switching
Main article: Packet switching
The issue of connecting separate physical networks to form one logical network was the first of many problems. Early networks used message switched systems that required rigid routing structures prone to single points of failure. In the 1960s, Paul Baran of the RAND Corporation produced a study of survivable networks for the U.S. military in the event of nuclear war. Information transmitted across Baran's network would be divided into what he called "message blocks". Independently, Donald Davies (National Physical Laboratory, UK) proposed and was the first to put into practice a local area network based on what he called packet switching, the term that would ultimately be adopted. Larry Roberts applied Davies' concepts of packet switching for the ARPANET wide area network, and sought input from Paul Baran and Leonard Kleinrock. Kleinrock subsequently developed the mathematical theory behind the performance of this technology, building on his earlier work on queueing theory.
Packet switching is a rapid store and forward networking design that divides messages up into arbitrary packets, with routing decisions made per-packet. It provides better bandwidth utilization and response times than the traditional circuit-switching technology used for telephony, particularly on resource-limited interconnection links.
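The core mechanics can be sketched in a few lines. This is an illustrative toy, not any historical implementation: a message is cut into small numbered packets, the packets may arrive out of order after taking different routes, and the receiver reorders by sequence number to rebuild the message.

```python
# Toy sketch of packet switching: split a message into numbered packets,
# let them arrive in any order, and reassemble by sequence number.
# (Illustrative only -- real packets also carry addresses, checksums, etc.)
import random

def packetize(message, size=8):
    """Split `message` into (sequence_number, payload) packets of `size` chars."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Rebuild the message by sorting packets on their sequence numbers."""
    return "".join(data for _, data in sorted(packets))

packets = packetize("Packet switching is store and forward.")
random.shuffle(packets)         # simulate out-of-order arrival
restored = reassemble(packets)  # identical to the original message
```

The per-packet sequence number is what lets routing decisions be made independently for each packet while still guaranteeing the receiver can reconstruct the original message.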
The software for establishing links between network sites in the ARPANET was the Network Control Program (NCP), completed in c. 1970. Further development in the early 1970s by Robert E. Kahn and Vint Cerf led to the formulation of the Transmission Control Program, and its specification in December 1974 in RFC 675. This work also coined the terms catenet (concatenated network) and internet as a contraction of internetworking, which describe the interconnection of multiple networks. This software was monolithic in design, using two simplex communication channels for each user session. The software was redesigned as a modular protocol stack, using full-duplex channels. Originally named IP/TCP, it was installed in the ARPANET for production use in January 1983.
Networks that led to the Internet
Main article: NPL network
Following discussions with J. C. R. Licklider, Donald Davies became interested in data communications for computer networks. At the National Physical Laboratory (United Kingdom) in 1965, Davies designed and proposed a national data network based on packet switching. The following year, he described the use of an “Interface computer” to act as a router. The proposal was not taken up nationally but by 1967, a pilot experiment had demonstrated the feasibility of packet switched networks.
By 1969 he had begun building the Mark I packet-switched network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions. In 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986. NPL, followed by ARPANET, were the first two networks in the world to use packet switching, and were interconnected in the early 1970s.
Robert Taylor was promoted to the head of the information processing office at Defense Advanced Research Projects Agency (DARPA) in June 1966. He intended to realize Licklider’s ideas of an interconnected networking system. Bringing in Larry Roberts from MIT, he initiated a project to build such a network. The first ARPANET link was established between the University of California, Los Angeles (UCLA) and the Stanford Research Institute at 22:30 hours on October 29, 1969.
“We set up a telephone connection between us and the guys at SRI …”, Kleinrock said in an interview. “We typed the L and we asked on the phone, ‘Do you see the L?’ ‘Yes, we see the L,’ came the response. We typed the O, and we asked, ‘Do you see the O?’ ‘Yes, we see the O.’ Then we typed the G, and the system crashed … Yet a revolution had begun.”
35 Years of the Internet, 1969–2004. Stamp of Azerbaijan, 2004.
By December 5, 1969, a 4-node network was connected by adding the University of Utah and the University of California, Santa Barbara. Building on ideas developed in ALOHAnet, the ARPANET grew rapidly. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days.
ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet protocols and systems. RFC 1, entitled “Host Software”, was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.
ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. The early ARPANET used the Network Control Program (NCP, sometimes Network Control Protocol) rather than TCP/IP. On January 1, 1983, known as flag day, NCP on the ARPANET was replaced by the more flexible and powerful family of TCP/IP protocols, marking the start of the modern Internet.
International collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in 1972, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and Peter Kirstein‘s research group in the UK, initially at the Institute of Computer Science, London University and later at University College London.
The Merit Network was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan’s public universities as a means to help the state’s educational and economic development. With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971, when an interactive host-to-host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit. In October 1972, connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet-attached hosts, and eventually TCP/IP, and additional public universities in Michigan joined the network. All of this set the stage for Merit’s role in the NSFNET project starting in the mid-1980s.
The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the early ARPANET design and to support network research generally. It was the first network to make the hosts responsible for reliable delivery of data, rather than the network itself, using unreliable datagrams and associated end-to-end protocol mechanisms. Concepts of this network influenced later ARPANET architecture.
X.25 and public data networks
Based on ARPA’s research, packet switching network standards were developed by the International Telecommunication Union (ITU) in the form of X.25 and related standards. While using packet switching, X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976.
The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong, and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.
Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET.
The first public dial-in networks used asynchronous TTY terminal protocols to reach a concentrator operated in the public network. Some networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy, which also provided communications, content, and entertainment features. Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet, which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.
UUCP and Usenet
In 1979, two students at Duke University, Tom Truscott and Jim Ellis, originated the idea of using Bourne shell scripts to transfer news and messages on a serial line UUCP connection with the nearby University of North Carolina at Chapel Hill. Following public release of the software in 1980, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, the ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and Bitnet. All connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984. Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time), owned both by private individuals and small companies. Sublink Network represented possibly one of the first examples of Internet technology spreading through popular diffusion.
Merging the networks and creating the Internet (1973–95)
Map of the TCP/IP test network in February 1982
With so many different network methods, something was needed to unify them. Robert E. Kahn of DARPA and ARPANET recruited Vinton Cerf of Stanford University to work with him on the problem. By 1973, they had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmermann, Gerard LeLann, Louis Pouzin (designer of the CYCLADES network), and his graduate students Judy Estrin, Richard Karp, Yogen Dalal and Carl Sunshine with important work on this design. This Stanford research team became known as the International Network Working Group, formed in 1973 and led by Cerf.
The specification of the resulting protocol, the Transmission Control Protocol (TCP), was published as RFC 675 by the Network Working Group in December 1974. It contains the first attested use of the term internet, as a shorthand for internetworking.
Between 1976 and 1977, Yogen Dalal proposed separating TCP’s routing and transmission control functions into two discrete layers, which led to the splitting of TCP into the TCP and IP protocols, and the development of TCP/IP.
With the role of the network reduced to a core of functionality, it became possible to exchange traffic with other networks independently of their detailed characteristics, thereby solving Kahn’s initial problem. DARPA agreed to fund development of prototype software, and after several years of work, the first demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted by the Stanford Research Institute. On November 22, 1977, a three-network demonstration was conducted, linking the ARPANET, SRI’s Packet Radio Van on the Packet Radio Network, and the Atlantic Packet Satellite network.
Stemming from the first specifications of TCP in 1974, TCP/IP emerged in 1978 in nearly its final form, as used for the first decades of the Internet and described in IETF publication RFC 791 (September 1981).
Decomposition of the quad-dotted IPv4 address representation to its binary value
IPv4 uses 32-bit addresses, which limits the address space to 2^32 addresses, i.e. 4,294,967,296 addresses. The last freely available blocks of IPv4 addresses were allocated in January 2011. IPv4 is being replaced by its successor, called “IPv6“, which uses 128-bit addresses, providing 2^128 addresses, i.e. 340,282,366,920,938,463,463,374,607,431,768,211,456. This is a vastly increased address space. The shift to IPv6 is expected to take many years, decades, or perhaps longer, to complete, since there were four billion machines using IPv4 when the shift began.
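The quad-dotted decomposition and the address-space arithmetic above can be sketched with Python’s standard `ipaddress` module; the address `192.168.1.10` is just an illustrative value, not one from the text:

```python
import ipaddress

# Each decimal field of the quad-dotted form is one byte of the
# 32-bit address, so the whole address is a single 32-bit integer.
addr = ipaddress.IPv4Address("192.168.1.10")   # illustrative address
as_int = int(addr)

print(as_int)            # 3232235786
print(f"{as_int:032b}")  # the 32-bit binary representation

# The address-space sizes discussed above:
print(2 ** 32)   # IPv4: 4294967296 addresses
print(2 ** 128)  # IPv6: 340282366920938463463374607431768211456 addresses
```

Each dotted field shifts into its own byte position, which is why four numbers in 0–255 exactly cover the 32-bit space.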
The associated standards for IPv4 were published by 1981 as RFCs 791, 792 and 793, and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On January 1, 1983, known as flag day, TCP/IP protocols became the standard for the ARPANET, replacing the earlier NCP protocol.
From ARPANET to NSFNET
BBN Technologies TCP/IP Internet map of early 1986.
After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA’s primary mission was funding cutting edge research and development, not running a communications utility. Eventually, in July 1975, the network had been turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.
The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were.
Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE) became heavily involved in Internet research and started development of a successor to ARPANET. In the mid-1980s, all three of these branches developed the first Wide Area Networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet.
T3 NSFNET Backbone, c. 1992
NASA developed the TCP/IP based NASA Science Network (NSN) in the mid-1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.
In 1981 NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange.
In 1986, the NSF created NSFNET, a 56 kbit/s backbone to support the NSF-sponsored supercomputing centers. The NSFNET also provided support for the creation of regional research and education networks in the United States, and for the connection of university and college campus networks to the regional networks. The use of NSFNET and the regional networks was not limited to supercomputer users and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a cooperative agreement with the Merit Network in partnership with IBM, MCI, and the State of Michigan. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990. NSFNET was expanded and upgraded to 45 Mbit/s in 1991, and was decommissioned in 1995 when it was replaced by backbones operated by several commercial Internet Service Providers.
Transition towards the Internet
The term “internet” was adopted in the first RFC published on the TCP protocol (RFC 675: Internet Transmission Control Program, December 1974) as an abbreviation of internetworking, and the two terms were used interchangeably. In general, an internet was any network using TCP/IP. Around the time the ARPANET was interlinked with NSFNET in the late 1980s, the term came to be used as the name of one particular network: the Internet, the large, global TCP/IP network.
As interest in networking grew and new applications for it were developed, the Internet’s technologies spread throughout the rest of the world. The network-agnostic approach of TCP/IP meant that it was easy to use any existing network infrastructure, such as the IPSS X.25 network, to carry Internet traffic. In 1982, a year before the ARPANET itself switched to TCP/IP, University College London replaced its transatlantic satellite links with TCP/IP over IPSS.
Many sites unable to link directly to the Internet created simple gateways for the transfer of electronic mail, the most important application of the time. Sites with only intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple mail peering, such as allowing access to File Transfer Protocol (FTP) sites via UUCP or mail.
Finally, routing technologies were developed for the Internet to remove the remaining centralized routing aspects. The Exterior Gateway Protocol (EGP) was replaced by a new protocol, the Border Gateway Protocol (BGP). This provided a meshed topology for the Internet and reduced the centralized architecture that the ARPANET had emphasized. In 1994, Classless Inter-Domain Routing (CIDR) was introduced to support better conservation of address space, allowing route aggregation to decrease the size of routing tables.
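The route-aggregation idea behind CIDR can be sketched with Python’s standard `ipaddress` module; the prefixes below are illustrative, not taken from any real routing table:

```python
import ipaddress

# Four contiguous /24 prefixes that a pre-CIDR router would carry
# as four separate routing-table entries (illustrative ranges).
routes = [
    ipaddress.ip_network("203.0.112.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("203.0.114.0/24"),
    ipaddress.ip_network("203.0.115.0/24"),
]

# With CIDR the four entries collapse into one aggregated prefix.
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)  # [IPv4Network('203.0.112.0/22')]
```

The four /24s share their top 22 bits, so a single /22 announcement covers them all, which is exactly how aggregation shrinks routing tables.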
TCP/IP goes global (1980s)
CERN, the European Internet, the link to the Pacific and beyond
Between 1984 and 1988 CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs and an accelerator control system. CERN continued to operate a limited self-developed system (CERNET) internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989.
In 1988, Daniel Karrenberg, from Centrum Wiskunde & Informatica (CWI) in Amsterdam, visited Ben Segal, CERN‘s TCP/IP Coordinator, looking for advice about the transition of the European side of the UUCP Usenet network (much of which ran over X.25 links) over to TCP/IP. In 1987, Ben Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks, and in 1989 CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out coordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.
At the same time as the rise of internetworking in Europe, ad hoc networking formed between Australian universities, and between them and ARPA, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors’ Committee and provided a dedicated IP-based network for Australia.
The Internet began to penetrate Asia in the 1980s. In May 1982, South Korea became the second country to successfully set up a TCP/IP IPv4 network. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNET in 1989. It hosted the annual meeting of the Internet Society, INET’92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.
The early global “digital divide” emerges
While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they are building organizations for Internet resource administration and sharing operational experience, as more and more transmission facilities go into place.
At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications.
In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now known as InfoCom, and NSN Network Services of Avon, Colorado, sold in 1997 and now known as Clear Channel Satellite, established Africa’s first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-Band RSCC Russian satellite which connected InfoMail’s Kampala offices directly to NSN’s MAE-West point of presence using a private network from NSN’s leased ground station in New Jersey. InfoCom’s first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems.
In 1996, a USAID funded project, the Leland Initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Ivory Coast and Benin in 1998.
Africa is building an Internet infrastructure. AFRINIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.
There are many programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between New Partnership for Africa’s Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.
Asia and Oceania
The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the continent. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).
South Korea’s first Internet system, the System Development Network (SDN), began operation on 15 May 1982. SDN was connected to the rest of the world in August 1983 using UUCP (Unix-to-Unix Copy); connected to CSNET in December 1984; and formally connected to the U.S. Internet in 1990.
In 1991, the People’s Republic of China saw its first TCP/IP college network, Tsinghua University‘s TUNET. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and the Stanford Linear Accelerator Center. However, China went on to create its own digital divide by implementing a country-wide content filter.
As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.
Rise of the global Internet (late 1980s/early 1990s onward)
Main article: Digital revolution
Initially, as with its predecessor networks, the system that would evolve into the Internet was primarily for government and government body use.
However, interest in commercial use of the Internet quickly became a commonly debated topic. Although commercial use was forbidden, the exact definition of commercial use was unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNET connections. (Some UUCP links still connected to these networks, however, as administrators turned a blind eye to their operation.)
As a result, during the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. The first commercial dialup ISP in the United States was The World, which opened in 1989.
In 1992, the U.S. Congress passed the Scientific and Advanced-Technology Act, 42 U.S.C. § 1862(g), which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks. This caused controversy within the research and education community, who were concerned commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations.
By 1990, ARPANET’s goals had been fulfilled, new networking technologies exceeded the original scope, and the project came to a close. New network service providers including PSINet, Alternet, CERFNet, ANS CO+RE, and many others were offering network access to commercial customers. NSFNET was no longer the de facto backbone and exchange point of the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area Exchanges (MAEs), and later Network Access Points (NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service and the service ended. NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs. NSF also sponsored the very high speed Backbone Network Service (vBNS), which continued to support the supercomputing centers and research and education in the United States.
World Wide Web and introduction of browsers
The World Wide Web (sometimes abbreviated “www” or “W3”) is an information space where documents and other web resources are identified by URIs, interlinked by hypertext links, and can be accessed via the Internet using a web browser and (more recently) web-based applications. It has become known simply as “the Web”. As of the 2010s, the World Wide Web is the primary tool billions use to interact on the Internet, and it has changed people’s lives immeasurably.
Precursors to the web browser emerged in the form of hyperlinked applications during the mid and late 1980s (the bare concept of hyperlinking had by then existed for some decades). Following these, Tim Berners-Lee is credited with inventing the World Wide Web in 1989 and developing in 1990 both the first web server, and the first web browser, called WorldWideWeb (no spaces) and later renamed Nexus. Many others were soon developed, with Marc Andreessen‘s 1993 Mosaic (later Netscape), being particularly easy to use and install, and often credited with sparking the internet boom of the 1990s. Today, the major web browsers are Firefox, Internet Explorer, Google Chrome, Opera and Safari.
A boost in web users was triggered in September 1993 by NCSA Mosaic, a graphical browser which eventually ran on several popular office and home computers. This was the first web browser aiming to bring multimedia content to non-technical users, and it therefore included images and text on the same page, unlike previous browser designs. Its lead developer, Marc Andreessen, went on to establish the company that in 1994 released Netscape Navigator, setting off one of the early browser wars: a competition for dominance (which Navigator ultimately lost) with Microsoft’s Internet Explorer. Commercial use restrictions were lifted in 1995. The online service America Online (AOL) offered its users a connection to the Internet via its own internal browser.
Use in wider society 1990s to early 2000s (Web 1.0)
The Internet was widely used for mailing lists, email, e-commerce and early popular online shopping (Amazon and eBay, for example), online forums and bulletin boards, and personal websites and blogs, and use was growing rapidly, but by more modern standards the systems used were static and lacked widespread social engagement. It would take a number of events in the early 2000s for the Internet to develop from a communications technology into a key part of global society’s infrastructure.
Typical design elements of these “Web 1.0”-era websites included: static pages instead of dynamic HTML; content served from filesystems instead of relational databases; pages built using Server Side Includes or CGI instead of a web application written in a dynamic programming language; HTML 3.2-era structures such as frames and tables to create page layouts; online guestbooks; overuse of GIF buttons and similar small graphics promoting particular items; and HTML forms sent via email. (Support for server-side scripting was rare on shared servers, so the usual feedback mechanism was email, using mailto forms and the visitor’s own email program.)
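As a rough sketch of that era’s server-side style, a minimal CGI-style page generator could look like the following; the page title, mailto address, and content are invented for illustration, not taken from any real site:

```python
#!/usr/bin/env python3
# Sketch of a CGI-era response: the web server runs the script once
# per request, and the script prints an HTTP header block, a blank
# line, then the HTML document. All names below are hypothetical.
import datetime

def render_page() -> str:
    body = (
        "<html><head><title>My Guestbook</title></head><body>"
        "<h1>Welcome to my home page!</h1>"
        f"<p>Page generated on {datetime.date.today()}.</p>"
        # Feedback went out via the visitor's own mail program:
        '<p><a href="mailto:webmaster@example.com">Sign my guestbook</a></p>'
        "</body></html>"
    )
    # CGI output: headers, a blank line, then the document.
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    print(render_page())
```

Every request re-ran the whole script, which is why such pages felt “static”: the output was the same screenful of HTML each time, with feedback routed through a mailto link rather than server-side form handling.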
During the period 1997 to 2001, the first speculative investment bubble related to the Internet took place, in which “dot-com” companies (referring to the “.com” top level domain used by businesses) were propelled to exceedingly high valuations as investors rapidly stoked stock values, followed by a market crash; the first dot-com bubble. However this only temporarily slowed enthusiasm and growth, which quickly recovered and continued to grow.
The changes that would propel the Internet into its place as a social system took place during a relatively short period of no more than five years, starting from around 2004. They included:
- The call to “Web 2.0” in 2004 (first suggested in 1999),
- Accelerating adoption and commoditization among households of, and familiarity with, the necessary hardware (such as computers).
- Accelerating storage technology and data access speeds – hard drives emerged, took over from far smaller, slower floppy discs, and grew from megabytes to gigabytes (and by around 2010, terabytes), RAM from hundreds of kilobytes to gigabytes as typical amounts on a system, and Ethernet, the enabling technology for TCP/IP, moved from common speeds of kilobits to tens of megabits per second, to gigabits per second.
- High speed Internet and wider coverage of data connections, at lower prices, allowing larger traffic rates, more reliable simpler traffic, and traffic from more locations,
- The gradually accelerating perception of the ability of computers to create new means and approaches to communication, the emergence of social media and websites such as Twitter and Facebook to their later prominence, and global collaborations such as Wikipedia (which existed before but gained prominence as a result),
and shortly after (approximately 2007–2008 onward):
- The mobile revolution, which provided access to the Internet to much of human society of all ages, in their daily lives, and allowed them to share, discuss, and continually update, inquire, and respond.
- Non-volatile memory rapidly grew in size and reliability, and decreased in price, becoming a commodity capable of enabling high levels of computing activity on these small handheld devices, as did solid-state drives (SSD).
- An emphasis on power efficient processor and device design, rather than purely high processing power; one of the beneficiaries of this was ARM, a British company which had focused since the 1980s on powerful but low cost simple microprocessors. ARM architecture rapidly gained dominance in the market for mobile and embedded devices.
With the call to Web 2.0, the period up to around 2004–2005 was retrospectively named and described by some as Web 1.0.
The term “Web 2.0” describes websites that emphasize user-generated content (including user-to-user interaction), usability, and interoperability. It first appeared in a January 1999 article called “Fragmented Future” written by Darcy DiNucci, a consultant on electronic information design, where she wrote: “The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will […] appear on your computer screen, […] on your TV set […] your car dashboard […] your cell phone […] hand-held game machines […] maybe even your microwave oven.”
The term resurfaced during 2002–2004, and gained prominence in late 2004 following presentations by Tim O’Reilly and Dale Dougherty at the first Web 2.0 Conference. In their opening remarks, John Battelle and Tim O’Reilly outlined their definition of the “Web as Platform”, where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that “customers are building your business for you”. They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be “harnessed” to create value.
Web 2.0 does not refer to an update to any technical specification, but rather to cumulative changes in the way Web pages are made and used. Web 2.0 describes an approach, in which sites focus substantially upon allowing users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community, in contrast to Web sites where people are limited to the passive viewing of content. Examples of Web 2.0 include social networking sites, blogs, wikis, folksonomies, video sharing sites, hosted services, Web applications, and mashups. Terry Flew, in his 3rd Edition of New Media described what he believed to characterize the differences between Web 1.0 and Web 2.0:”[The] move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on tagging (folksonomy)”.
The mobile revolution
The process of change generally described as “Web 2.0” was itself greatly accelerated and transformed only a short time later by the increasing growth in mobile devices. This mobile revolution meant that computers in the form of smartphones became something many people used, took with them everywhere, communicated with, used for photographs and videos they instantly shared or to shop or seek information “on the move” – and used socially, as opposed to items on a desk at home or just used for work.
Location-based services, services using location and other sensor information, and crowdsourcing (frequently but not always location based), became common, with posts tagged by location, or websites and services becoming location aware. Mobile-targeted websites (such as “m.website.com”) became common, designed especially for the new devices used. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile chips capable of running at nearly the power of desktops from not many years before on far lower power usage, became enablers of this stage of Internet development, and the term “App” emerged (short for “Application program” or “Program”), as did the “App store”.
Networking in outer space
Main article: Interplanetary Internet
The first Internet link into low earth orbit was established on January 22, 2010 when astronaut T. J. Creamer posted the first unassisted update to his Twitter account from the International Space Station, marking the extension of the Internet into space. (Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station’s high-speed Ku band microwave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth using Voice over IP equipment.
Communication with spacecraft beyond earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s, NASA and Google began working on a new network protocol, Delay-tolerant networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space weather disrupts the connection. Under such conditions, DTN retransmits data packets instead of dropping them, as standard TCP/IP does. NASA conducted the first field test of what it calls the “deep space internet” in November 2008. Testing of DTN-based communications between the International Space Station and Earth (now termed Disruption-Tolerant Networking) has been ongoing since March 2009, and was scheduled to continue until March 2014.
This network technology is supposed to ultimately enable missions that involve multiple spacecraft where reliable inter-vessel communication might take precedence over vessel-to-earth downlinks. According to a February 2011 statement by Google’s Vint Cerf, the so-called “Bundle protocols” have been uploaded to NASA’s EPOXI mission spacecraft (which is in orbit around the Sun) and communication with Earth has been tested at a distance of approximately 80 light seconds.
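The store-and-forward idea behind DTN can be sketched in a few lines of Python. This is an illustrative toy, not any NASA implementation: the `DtnNode` class and its queue are invented for this example. Bundles are retained in custody while the link is down and forwarded once contact resumes, rather than being dropped the way a timed-out TCP connection would drop them.

```python
from collections import deque

class DtnNode:
    """Toy store-and-forward node: bundles wait in a queue until the link is up."""
    def __init__(self):
        self.queue = deque()   # custody of bundles not yet delivered
        self.link_up = False
        self.delivered = []

    def send(self, bundle):
        self.queue.append(bundle)
        self.flush()

    def flush(self):
        # Forward only while the link is available; otherwise retain custody.
        while self.link_up and self.queue:
            self.delivered.append(self.queue.popleft())

node = DtnNode()
node.send("telemetry-1")     # link down: bundle is stored, not dropped
node.send("telemetry-2")
node.link_up = True          # contact resumes (e.g. spacecraft clears the Moon)
node.flush()
print(node.delivered)        # ['telemetry-1', 'telemetry-2']
```

Nothing is lost during the outage; delivery order is preserved once the link returns.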
Main article: Internet governance
As a globally distributed network of voluntarily interconnected autonomous networks, the Internet operates without a central governing body. Each constituent network chooses the technologies and protocols it deploys from the technical standards that are developed by the Internet Engineering Task Force (IETF). However, successful interoperation of many networks requires certain parameters that must be common throughout the network. For managing such parameters, the Internet Assigned Numbers Authority (IANA) oversees the allocation and assignment of various technical identifiers. In addition, the Internet Corporation for Assigned Names and Numbers (ICANN) provides oversight and coordination for the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System.
NIC, InterNIC, IANA, and ICANN
The IANA function was originally performed by USC Information Sciences Institute (ISI), and it delegated portions of this responsibility with respect to numeric network and autonomous system identifiers to the Network Information Center (NIC) at Stanford Research Institute (SRI International) in Menlo Park, California. ISI’s Jonathan Postel managed the IANA, served as RFC Editor and performed other key roles until his premature death in 1998.
As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by ISI’s Paul Mockapetris in 1983. The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract. In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.
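Before the DNS, name resolution amounted to a lookup in that flat file. A minimal sketch in Python (the addresses and host names below are invented, merely styled after the era): every host needed its own complete, current copy of the table, which is exactly what stopped scaling as the network grew.

```python
# Parse a HOSTS.TXT-style flat file into a name -> address table.
hosts_txt = """\
10.0.0.1  SRI-NIC
10.0.0.5  MIT-AI
10.0.0.9  UCLA-TEST
"""

table = {}
for line in hosts_txt.splitlines():
    if line.strip() and not line.startswith("#"):
        addr, name = line.split()
        table[name.upper()] = addr     # host names were case-insensitive

print(table["MIT-AI"])  # 10.0.0.5
```

The DNS replaced this single shared file with a hierarchy of delegated zones, so no one party had to distribute, or hold, the entire mapping.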
The increasing cultural diversity of the Internet also posed administrative challenges for centralized management of the IP addresses. In October 1992, the Internet Engineering Task Force (IETF) published RFC 1366, which described the “growth of the Internet and its increasing globalization” and set out the basis for an evolution of the IP registry process, based on a regionally distributed registry model. This document stressed the need for a single Internet number registry to exist in each geographical region of the world (which would be of “continental dimensions”). Registries would be “unbiased and widely recognized by network providers and subscribers” within their region. The RIPE Network Coordination Centre (RIPE NCC) was established as the first RIR in May 1992. The second RIR, the Asia Pacific Network Information Centre (APNIC), was established in Tokyo in 1993, as a pilot project of the Asia Pacific Networking Group.
Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations. Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.
Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and the Federal Networking Council (FNC), the decision was made to separate the management of domain names from the management of IP numbers. Following the examples of RIPE NCC and APNIC, it was recommended that management of IP address space then administered by the InterNIC should be under the control of those that use it, specifically the ISPs, end-user organizations, corporate entities, universities, and individuals. As a result, the American Registry for Internet Numbers (ARIN) was established in December 1997, as an independent, not-for-profit corporation by direction of the National Science Foundation, and became the third Regional Internet Registry.
In 1998, both the IANA and remaining DNS-related InterNIC functions were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. As these tasks involved technical coordination for two principal Internet name spaces (DNS names and IP addresses) created by the IETF, ICANN also signed a memorandum of understanding with the IAB to define the technical work to be carried out by the Internet Assigned Numbers Authority. The management of Internet address space remained with the regional Internet registries, which collectively were defined as a supporting organization within the ICANN structure. ICANN provides central coordination for the DNS system, including policy coordination for the split registry / registrar system, with competition among registry service providers to serve each top-level-domain and multiple competing registrars offering DNS services to end-users.
Internet Engineering Task Force
The Internet Engineering Task Force (IETF) is the largest and most visible of several loosely related ad-hoc groups that provide technical direction for the Internet, including the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF).
The IETF is a loosely self-organized group of international volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the work of the IETF is organized into Working Groups. Standardization efforts of the Working Groups are often adopted by the Internet community, but the IETF does not control or patrol the Internet.
The IETF grew out of quarterly meetings of U.S. government-funded researchers, starting in January 1986. Non-government representatives were invited starting with the fourth IETF meeting in October 1986. The concept of Working Groups was introduced at the fifth meeting in February 1987. The seventh meeting in July 1987 was the first meeting with more than one hundred attendees. In 1992, the Internet Society, a professional membership society, was formed and the IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, The Netherlands, in July 1993. Today, the IETF meets three times per year and attendance has been as high as ca. 2,000 participants. Typically one in three IETF meetings is held in Europe or Asia. The number of non-US attendees is typically ca. 50%, even at meetings held in the United States.
The IETF is not a legal entity, has no governing board, no members, and no dues. The closest status resembling membership is being on an IETF or Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of the Internet Engineering Steering Group (IESG) and the Internet Architecture Board (IAB). The Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer term research issues.
Request for Comments
Request for Comments (RFCs) are the main documentation for the work of the IAB, IESG, IETF, and IRTF. RFC 1, “Host Software”, was written by Steve Crocker at UCLA in April 1969, well before the IETF was created. Originally they were technical memos documenting aspects of ARPANET development and were edited by Jon Postel, the first RFC Editor.
RFCs cover a wide range of information from proposed standards, draft standards, full standards, best practices, experimental protocols, history, and other informational topics. RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group. Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information will be re-published as a new RFC that “obsoletes” the original.
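Because a published RFC is never revised, finding the current specification means following the chain of “obsoletes” relationships to its end. A small sketch of that chain-walk, using the real SMTP lineage (RFC 821 → RFC 2821 → RFC 5321) as sample data; the `obsoleted_by` table here is only a tiny hand-written fragment, not the real RFC index:

```python
# Fragment of the obsoletes graph: each RFC may be obsoleted by a newer one.
obsoleted_by = {
    821: 2821,    # SMTP, obsoleted in 2001
    2821: 5321,   # obsoleted again in 2008
}

def current_rfc(number):
    """Follow the 'obsoleted-by' chain until reaching an RFC with no successor."""
    seen = set()
    while number in obsoleted_by:
        if number in seen:            # guard against a malformed cycle
            raise ValueError("cycle in obsoletes chain")
        seen.add(number)
        number = obsoleted_by[number]
    return number

print(current_rfc(821))   # 5321
```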
The Internet Society
The Internet Society (ISOC) is an international, nonprofit organization founded during 1992 “to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world”. With offices near Washington, DC, USA, and in Geneva, Switzerland, ISOC has a membership base comprising more than 80 organizational and more than 50,000 individual members. Members also form “chapters” based on either common geographical location or special interests. There are currently more than 90 chapters around the world.
ISOC provides financial and organizational support to and promotes the work of the standards-setting bodies for which it is the organizational home: the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). ISOC also promotes understanding and appreciation of the Internet model of open, transparent processes and consensus-based decision-making.
Globalization and Internet governance in the 21st century
Since the 1990s, the Internet’s governance and organization have been of global importance to governments, commerce, civil society, and individuals. The organizations which held control of certain technical aspects of the Internet were the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While recognized as the administrators of certain aspects of the Internet, their roles and their decision-making authority are limited and subject to increasing international scrutiny and increasing objections. These objections led ICANN to end its relationship first with the University of Southern California in 2000, and then, in September 2009, to gain autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the U.S. Department of Commerce continued. Finally, on October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA), allowing oversight to pass to the global Internet community.
The IETF, with financial and organizational support from the Internet Society, continues to serve as the Internet’s ad-hoc standards body and issues Request for Comments.
In November 2005, the World Summit on the Information Society, held in Tunis, called for an Internet Governance Forum (IGF) to be convened by the United Nations Secretary-General. The IGF opened an ongoing, non-binding conversation among stakeholders representing governments, the private sector, civil society, and the technical and academic communities about the future of Internet governance. The first IGF meeting was held in October/November 2006, with follow-up meetings annually thereafter. Since WSIS, the term “Internet governance” has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues.
Politicization of the Internet
Due to its prominence and immediacy as an effective means of mass communication, the Internet has also become more politicized as it has grown. This has led, in turn, to discourses and activities that would once have taken place in other ways migrating to being mediated by the Internet, including:
- The spreading of ideas and opinions;
- Recruitment of followers, and “coming together” of members of the public, for ideas, products, and causes;
- Providing and widely distributing and sharing information that might be deemed sensitive or relates to whistleblowing (and efforts by specific countries to prevent this by censorship);
- Criminal activity and terrorism (and resulting law enforcement use, together with its facilitation by mass surveillance);
- Politically-motivated fake news.
Main article: Net neutrality
On April 23, 2014, the Federal Communications Commission (FCC) was reported to be considering a new rule that would permit Internet service providers to offer content providers a faster track to send content, thus reversing their earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On May 15, 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunication service, thereby preserving net neutrality. On November 10, 2014, President Obama recommended the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On January 16, 2015, Republicans presented legislation, in the form of a U.S. Congress HR discussion draft bill, that made concessions to net neutrality but prohibited the FCC from accomplishing the goal or enacting any further regulation affecting Internet service providers (ISPs). On January 31, 2015, AP News reported that the FCC would present the notion of applying (“with some caveats”) Title II (common carrier) of the Communications Act of 1934 to the Internet in a vote expected on February 26, 2015. Adoption of this notion would reclassify Internet service from one of information to one of telecommunications and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC was expected to enforce net neutrality in its vote, according to The New York Times.
On February 26, 2015, the FCC ruled in favor of net neutrality by applying Title II (common carrier) of the Communications Act of 1934 and Section 706 of the Telecommunications act of 1996 to the Internet. The FCC chairman, Tom Wheeler, commented, “This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept.”
On March 12, 2015, the FCC released the specific details of the net neutrality rules. On April 13, 2015, the FCC published the final rule on its new “Net Neutrality” regulations.
On December 14, 2017, the FCC repealed its March 12, 2015 net neutrality rules by a 3–2 vote.
Use and culture
Email and Usenet
E-mail has often been called the killer application of the Internet. It predates the Internet, and was a crucial tool in creating it. Email started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is undocumented, among the first systems to have such a facility were the System Development Corporation (SDC) Q32 and the Compatible Time-Sharing System (CTSS) at MIT.
The ARPANET computer network made a large contribution to the evolution of electronic mail. Experimental inter-system mail was transferred on the ARPANET shortly after its creation. In 1971 Ray Tomlinson created what was to become the standard Internet electronic mail addressing format, using the @ sign to separate mailbox names from host names.
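Tomlinson’s convention still defines the address format today: everything after the last “@” names the host, and everything before it names the mailbox. A minimal, illustrative parser (the sample address is invented for the example):

```python
def split_address(address):
    """Split 'mailbox@host' at the last '@': the host part never contains '@'."""
    mailbox, sep, host = address.rpartition("@")
    if not sep or not mailbox or not host:
        raise ValueError(f"not a valid address: {address!r}")
    return mailbox, host

print(split_address("tomlinson@bbn-tenexa"))  # ('tomlinson', 'bbn-tenexa')
```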
A number of protocols were developed to deliver messages among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM‘s VNET email system. Email could be passed this way between a number of networks, including ARPANET, BITNET and NSFNET, as well as to hosts connected directly to other sites via UUCP. See the history of SMTP protocol.
In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNET similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).
During the early years of the Internet, email and similar mechanisms were also fundamental to allow people to access resources that were not available due to the absence of online connectivity. UUCP was often used to distribute files using the ‘alt.binaries’ groups. Also, FTP e-mail gateways allowed people who lived outside the US and Europe to download files using ftp commands written inside email messages. The file was encoded, broken into pieces and sent by email; the receiver had to reassemble and decode it later, and it was the only way for people living overseas to download items such as early Linux versions using the slow dial-up connections available at the time. After the popularization of the Web and the HTTP protocol, such tools were slowly abandoned.
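The encode-split-reassemble mechanics of those gateways can be sketched as a round trip. One hedge: gateways of the era typically used uuencode, whereas base64 stands in here because it ships with Python’s standard library; the payload and chunk size are invented for the example.

```python
import base64

def to_mail_chunks(payload: bytes, chunk_size: int = 12) -> list[str]:
    """Encode a binary file as printable text and split it into message-sized pieces."""
    encoded = base64.b64encode(payload).decode("ascii")
    return [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]

def from_mail_chunks(chunks: list[str]) -> bytes:
    """The receiver reassembles the messages in order and decodes the file."""
    return base64.b64decode("".join(chunks))

data = b"example-archive contents..."
messages = to_mail_chunks(data)          # each element would be one email body
assert from_mail_chunks(messages) == data
```

Each chunk travels as the body of a separate email; the burden of ordering and reassembly fell entirely on the recipient.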
From Gopher to the WWW
As the Internet grew through the 1980s and early 1990s, many people realized the increasing need to be able to find and organize files and information. Projects such as Archie, Gopher, WAIS, and the FTP Archive list attempted to create ways to organize distributed data. In the early 1990s, Gopher, invented by Mark P. McCahill, offered a viable alternative to the World Wide Web. However, in 1993 the World Wide Web saw many advances to indexing and ease of access through search engines, which often neglected Gopher and Gopherspace. As popularity increased through ease of use, investment incentives also grew until in the middle of 1994 the WWW’s popularity gained the upper hand. Then it became clear that Gopher and the other projects were doomed to fall short.
One of the most promising user interface paradigms during this period was hypertext. The technology had been inspired by Vannevar Bush‘s “Memex“ and developed through Ted Nelson‘s research on Project Xanadu, Douglas Engelbart‘s research on NLS and Augment, and Andries van Dam‘s research from HES in 1968, through FRESS, Intermedia, and others. Many small self-contained hypertext systems had been created as well, such as Apple Computer’s HyperCard (1987). Gopher became the first commonly used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way.
In 1989, while working at CERN, Tim Berners-Lee invented a network-based implementation of the hypertext concept. By releasing his invention to public use, he encouraged widespread use. For his work in developing the World Wide Web, Berners-Lee received the Millennium technology prize in 2004. One early popular web browser, modeled after HyperCard, was ViolaWWW.
A turning point for the World Wide Web began with the introduction of the Mosaic web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana–Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, also known as the “Gore Bill“. Mosaic’s graphical interface soon became more popular than Gopher, which at the time was primarily text-based, and the WWW became the preferred interface for accessing the Internet. (Gore’s reference to his role in “creating the Internet”, however, was ridiculed in his presidential election campaign. See the full article Al Gore and information technology).
Mosaic was superseded in 1994 by Andreessen’s Netscape Navigator, which replaced Mosaic as the world’s most popular browser. While it held this title for some time, eventually competition from Internet Explorer and a variety of other browsers almost completely displaced it. Another important event held on January 11, 1994, was The Superhighway Summit at UCLA‘s Royce Hall. This was the “first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications.”
24 Hours in Cyberspace, “the largest one-day online event” (February 8, 1996) up to that date, took place on the then-active website, cyber24.com. It was headed by photographer Rick Smolan. A photographic exhibition was unveiled at the Smithsonian Institution‘s National Museum of American History on January 23, 1997, featuring 70 photos from the project.
Main article: Search engine (computing)
Even before the World Wide Web, there were search engines that attempted to organize the Internet. The first of these was the Archie search engine from McGill University in 1990, followed in 1991 by WAIS and Gopher. All three of those systems predated the invention of the World Wide Web but all continued to index the Web and the rest of the Internet for several years after the Web appeared. There are still Gopher servers as of 2006, although there are a great many more web servers.
As the Web grew, search engines and Web directories were created to track pages on the Web and allow people to find things. The first full-text Web search engine was WebCrawler in 1994. Before WebCrawler, only Web page titles were searched. Another early search engine, Lycos, was created in 1993 as a university project, and was the first to achieve commercial success. During the late 1990s, both Web directories and Web search engines were popular—Yahoo! (founded 1994) and AltaVista (founded 1995) were the respective industry leaders. By August 2001, the directory model had begun to give way to search engines, tracking the rise of Google (founded 1998), which had developed new approaches to relevancy ranking. Directory features, while still commonly available, became afterthoughts to search engines.
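The difference between title-only and full-text indexing is easy to see on a toy corpus. The two pages below are invented for illustration; a page whose body mentions a term is invisible to a title-only search but found by a full-text one.

```python
# Toy corpus: page id -> title and body text.
pages = {
    "p1": {"title": "Mosaic browser", "body": "graphical browser from NCSA"},
    "p2": {"title": "NCSA home", "body": "home of the Mosaic web browser"},
}

def title_search(term):
    """Pre-WebCrawler style: match against titles only."""
    return {pid for pid, p in pages.items() if term in p["title"].lower()}

def full_text_search(term):
    """Full-text style: match against titles and bodies."""
    return {pid for pid, p in pages.items()
            if term in p["title"].lower() or term in p["body"].lower()}

print(title_search("mosaic"))              # {'p1'}
print(sorted(full_text_search("mosaic")))  # ['p1', 'p2']
```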
Database size, which had been a significant marketing feature through the early 2000s, was similarly displaced by emphasis on relevancy ranking, the methods by which search engines attempt to sort the best results first. Relevancy ranking first became a major issue circa 1996, when it became apparent that it was impractical to review full lists of results. Consequently, algorithms for relevancy ranking have continuously improved. Google’s PageRank method for ordering the results has received the most press, but all major search engines continually refine their ranking methodologies with a view toward improving the ordering of results. As of 2006, search engine rankings are more important than ever, so much so that an industry has developed (“search engine optimizers“, or “SEO”) to help web-developers improve their search ranking, and an entire body of case law has developed around matters that affect search engine rankings, such as use of trademarks in metatags. The sale of search rankings by some search engines has also created controversy among librarians and consumer advocates.
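The idea behind PageRank — a page is important if important pages link to it — can be shown with a short power-iteration sketch. This is a textbook simplification (no handling of dangling pages or personalization), not Google’s production algorithm, and the three-page link graph is invented.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration: repeatedly redistribute each page's rank along its out-links.
    `links` maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# 'b' is linked to by both 'a' and 'c', so it ends up ranked highest.
links = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))   # b
```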
Resource or file sharing has been an important activity on computer networks from well before the Internet was established and was supported in a variety of ways including bulletin board systems (1978), Usenet (1980), Kermit (1981), and many others. The File Transfer Protocol (FTP) for use on the Internet was standardized in 1985 and is still in use today. A variety of tools were developed to aid the use of FTP by helping users discover files they might want to transfer, including the Wide Area Information Server (WAIS) in 1991, Gopher in 1991, Archie in 1991, Veronica in 1992, Jughead in 1993, Internet Relay Chat (IRC) in 1988, and eventually the World Wide Web (WWW) in 1991 with Web directories and Web search engines.
In 1999, Napster became the first peer-to-peer file sharing system. Napster used a central server for indexing and peer discovery, but the storage and transfer of files was decentralized. A variety of peer-to-peer file sharing programs and services with different levels of decentralization and anonymity followed, including: Gnutella, eDonkey2000, and Freenet in 2000, FastTrack, Kazaa, Limewire, and BitTorrent in 2001, and Poisoned in 2003.
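Napster’s split between centralized discovery and decentralized transfer can be sketched as a tiny index server. The class and peer names are invented for illustration; the key property is that the server knows only *who* has a file, never the file itself.

```python
class CentralIndex:
    """Napster-style index: peers announce their files; downloads happen peer-to-peer."""
    def __init__(self):
        self.index = {}            # filename -> set of peer addresses

    def announce(self, peer, filenames):
        for name in filenames:
            self.index.setdefault(name, set()).add(peer)

    def lookup(self, filename):
        return sorted(self.index.get(filename, set()))

index = CentralIndex()
index.announce("peer-a:6699", ["song.mp3"])
index.announce("peer-b:6699", ["song.mp3", "other.mp3"])

print(index.lookup("song.mp3"))   # ['peer-a:6699', 'peer-b:6699']
```

The actual transfer then goes directly between peers, which is why later systems could decentralize the index too (Gnutella) or split files across many peers (BitTorrent).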
All of these tools are general purpose and can be used to share a wide variety of content, but sharing of music files, software, and later movies and videos are major uses. And while some of this sharing is legal, large portions are not. Lawsuits and other legal actions caused Napster in 2001, eDonkey2000 in 2005, Kazaa in 2006, and Limewire in 2010 to shut down or refocus their efforts. The Pirate Bay, founded in Sweden in 2003, continues despite a trial and appeal in 2009 and 2010 that resulted in jail terms and large fines for several of its founders. File sharing remains contentious and controversial with charges of theft of intellectual property on the one hand and charges of censorship on the other.
Main article: Dot-com bubble
Suddenly the low price of reaching millions worldwide, and the possibility of selling to or hearing from those people at the same moment when they were reached, promised to overturn established business dogma in advertising, mail-order sales, customer relationship management, and many more areas. The web was a new killer app—it could bring together unrelated buyers and sellers in seamless and low-cost ways. Entrepreneurs around the world developed new business models, and ran to their nearest venture capitalist. While some of the new entrepreneurs had experience in business and economics, the majority were simply people with ideas, and did not manage the capital influx prudently. Additionally, many dot-com business plans were predicated on the assumption that by using the Internet, they would bypass the distribution channels of existing businesses and therefore not have to compete with them; when the established businesses with strong existing brands developed their own Internet presence, these hopes were shattered, and the newcomers were left attempting to break into markets dominated by larger, more established businesses. Many did not have the ability to do so.
The dot-com bubble burst in March 2000, with the technology heavy NASDAQ Composite index peaking at 5,048.62 on March 10 (5,132.52 intraday), more than double its value just a year before. By 2001, the bubble’s deflation was running full speed. A majority of the dot-coms had ceased trading, after having burnt through their venture capital and IPO capital, often without ever making a profit. But despite this, the Internet continues to grow, driven by commerce, ever greater amounts of online information and knowledge and social networking.
Mobile phones and the Internet
See also: Mobile Web
The first mobile phone with Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. The viability of accessing Internet services on mobile phones was limited until prices came down from that model, and network providers started to develop systems and services conveniently accessible on phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-mode, in 1999, and this is considered the birth of mobile phone Internet services. In 2001, the mobile phone email system by Research in Motion (now BlackBerry Limited) for their BlackBerry product was launched in America. To make efficient use of the small screen and tiny keypad and one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices, the Wireless Application Protocol (WAP). Most mobile device Internet services operate using WAP. The growth of mobile phone services was initially a primarily Asian phenomenon, with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC. Developing countries followed, with India, South Africa, Kenya, the Philippines, and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. The European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet access was more gradual, but had reached national penetration levels of 20–30% in most Western countries. The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user.
Web pages were initially conceived as structured documents based upon Hypertext Markup Language (HTML), which allows access to images, video, and other content. Hyperlinks in the page permit users to navigate to other pages. In the earliest browsers, images opened in a separate "helper" application. Marc Andreessen's 1993 Mosaic and 1994 Netscape introduced mixed text and images for non-technical users. HTML evolved during the 1990s, leading to HTML 4, which introduced large elements of CSS styling and, later, extensions that allow browser code to make calls and request content from servers in a structured way (AJAX).
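The document-plus-hyperlinks model described above can be sketched with a short example. The snippet below uses Python's standard `html.parser` module to extract the `href` targets of `<a>` tags from a small, hypothetical HTML page (the page content and URLs are invented for illustration; this is a minimal sketch, not how any particular browser works):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags, the hyperlinks that
    let a user navigate from this page to others."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A hypothetical early-1990s-style page: structured text with hyperlinks.
page = """
<html><body>
<h1>Welcome</h1>
<p>See the <a href="/history.html">history</a> of the project,
or visit <a href="http://example.org/">another site</a>.</p>
</body></html>
"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/history.html', 'http://example.org/']
```

A browser does essentially this while rendering: it parses the structured document, presents the link text, and follows the associated URL when the user selects it.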
There are nearly insurmountable problems in supplying a historiography of the Internet’s development. The process of digitization represents a twofold challenge both for historiography in general and, in particular, for historical communication research. A sense of the difficulty in documenting early developments that led to the internet can be gathered from the quote:
The Arpanet period is somewhat well documented because the corporation in charge – BBN – left a physical record. Moving into the NSFNET era, it became an extraordinarily decentralized process. The record exists in people's basements, in closets. … So much of what happened was done verbally and on the basis of individual trust. — Doug Gale (2007)
- History of hypertext
- History of telecommunication
- History of the Internet in Sweden
- History of the web browser
- Index of Internet-related articles
- Internet activism
- Internet censorship
- List of Internet pioneers
- MH & xmh: Email for Users & Programmers
- Net neutrality
- Nerds 2.0.1: A Brief History of the Internet
- On the Internet, nobody knows you’re a dog
- Outline of the Internet
- ^ Kim, Byung-Keun (2005). Internationalising the Internet the Co-evolution of Influence and Technology. Edward Elgar. pp. 51–55. ISBN 978-1845426750.
- ^ “Brief History of the Internet”. Internet Society. Retrieved April 9, 2016. It happened that the work at MIT (1961–1967), at RAND (1962–1965), and at NPL (1964–1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word ‘packet’ was adopted from the work at NPL
- ^ Turing’s Legacy: A History of Computing at the National Physical Laboratory 1945–1995, David M. Yates, National Museum of Science and Industry, 1997, pp. 126–146, ISBN 0901805947. Retrieved 19 May 2015.
- ^ “Data Communications at the National Physical Laboratory (1965–1975)”, Martin Campbell-Kelly, IEEE Annals of the History of Computing, Volume 9 Issue 3–4 (July–Sept 1987), pp. 221–247. Retrieved 18 May 2015.
- ^ “The First ISP”. Indra.com. August 13, 1992. Archived from the original on March 5, 2016. Retrieved October 17, 2015.
- ^ Couldry, Nick (2012). Media, Society, World: Social Theory and Digital Media Practice. London: Polity Press. p. 2. ISBN 9780745639208.
- ^ “The World’s Technological Capacity to Store, Communicate, and Compute Information”, Martin Hilbert and Priscila López (2011), Science, 332(6025), pp. 60–65; free access to the article through here: martinhilbert.net/WorldInfoCapacity.html
- ^ The Editorial Board (October 15, 2018). “There May Soon Be Three Internets. America’s Won’t Necessarily Be the Best. – A breakup of the web grants privacy, security and freedom to some, and not so much to others”. The New York Times. Retrieved October 16, 2018.
- ^ Jindal, R. P. (2009). “From millibits to terabits per second and beyond – Over 60 years of innovation”. 2009 2nd International Workshop on Electron Devices and Semiconductor Technology: 1–6. doi:10.1109/EDST.2009.5166093.
- ^ Jakubowski, A.; Łukasiak, L. (2010). “History of Semiconductors”. Journal of Telecommunications and Information Technology. nr 1: 3–9.
- ^ Lambert, Laura; Poole, Hilary W.; Woodford, Chris; Moschovitis, Christos J. P. (2005). The Internet: A Historical Encyclopedia. ABC-CLIO. p. 16. ISBN 9781851096596.
- ^ Gaudin, Sharon (December 12, 2007). “The transistor: The most important invention of the 20th century?”. Computerworld. Retrieved August 10, 2019.
- ^ “1960 – Metal Oxide Semiconductor (MOS) Transistor Demonstrated”. The Silicon Engine. Computer History Museum.
- ^ Lojek, Bo (2007). History of Semiconductor Engineering. Springer Science & Business Media. pp. 321–3. ISBN 9783540342588.
- ^ “Who Invented the Transistor?”. Computer History Museum. December 4, 2013. Retrieved July 20, 2019.
- ^ “Triumph of the MOS Transistor”. YouTube. Computer History Museum. August 6, 2010. Retrieved July 21, 2019.
- ^ Raymer, Michael G. (2009). The Silicon Web: Physics for the Internet Age. CRC Press. p. 365. ISBN 9781439803127.
- ^ “Transistors – an overview”. ScienceDirect. Retrieved August 8, 2019.
- ^ Baliga, B. Jayant (2005). Silicon RF Power MOSFETS. World Scientific. ISBN 9789812561213.
- ^ Asif, Saad (2018). 5G Mobile Communications: Concepts and Technologies. CRC Press. pp. 128–134. ISBN 9780429881343.
- ^ O’Neill, A. (2008). “Asad Abidi Recognized for Work in RF-CMOS”. IEEE Solid-State Circuits Society Newsletter. 13 (1): 57–58. doi:10.1109/N-SSC.2008.4785694. ISSN 1098-4232.
- ^ Cherry, Steven (2004). “Edholm’s law of bandwidth”. IEEE Spectrum. 41 (7): 58–60. doi:10.1109/MSPEC.2004.1309810.
- ^ J. C. R. Licklider (March 1960). “Man-Computer Symbiosis”. IRE Transactions on Human Factors in Electronics. HFE-1: 4–11. doi:10.1109/thfe2.1960.4503259. Archived from the original on November 3, 2005. Retrieved January 25, 2014.
- ^ J. C. R. Licklider and Welden Clark (August 1962). “On-Line Man-Computer Communication” (PDF). AIEE-IRE ’62 (Spring): 113–128.
- ^ Licklider, J. C. R. (April 23, 1963). “Topics for Discussion at the Forthcoming Meeting, Memorandum For: Members and Affiliates of the Intergalactic Computer Network”. Washington, D.C.: Advanced Research Projects Agency. Retrieved January 26, 2013.
- ^ Robert Taylor in an interview with John Markoff (December 20, 1999). “An Internet Pioneer Ponders the Next Revolution”. The New York Times. Retrieved November 25, 2005.
- ^ “J.C.R. Licklider and the Universal Network”. The Internet. 2000.
- ^ Baran, Paul (May 27, 1960). “Reliable Digital Communications Using Unreliable Network Repeater Nodes” (PDF). The RAND Corporation: 1. Retrieved July 25, 2012.
- ^ “About Rand”. Paul Baran and the Origins of the Internet. Retrieved July 25, 2012.
- ^ “Inductee Details – Donald Watts Davies”. National Inventors Hall of Fame. Archived from the original on September 6, 2017. Retrieved September 6, 2017.
- ^ Campbell-Kelly, Martin (Autumn 2008). “Pioneer Profiles: Donald Davies”. Computer Resurrection (44). ISSN 0958-7403.
- ^ Gillies, James; Cailliau, Robert (2000). How the Web was Born: The Story of the World Wide Web. Oxford University Press. p. 26. ISBN 978-0192862075.
- ^ Ruthfield, Scott (September 1995). “The Internet’s History and Development From Wartime Tool to the Fish-Cam”. Crossroads. 2 (1). pp. 2–4. doi:10.1145/332198.332202. Archived from the original on October 18, 2007. Retrieved April 1, 2016.
- ^ Roberts, Dr. Lawrence G. (November 1978). “The Evolution of Packet Switching”. Archived from the original on March 24, 2016. Retrieved September 5, 2017. Almost immediately after the 1965 meeting, Donald Davies conceived of the details of a store-and-forward packet switching system
- ^ Roberts, Dr. Lawrence G. (May 1995). “The ARPANET & Computer Networks”. Archived from the original on March 24, 2016. Retrieved April 13, 2016.
- ^ Roberts, Dr. Lawrence G. (May 1995). “The ARPANET & Computer Networks”. Archived from the original on March 24, 2016. Retrieved April 13, 2016. Then in June 1966, Davies wrote a second internal paper, “Proposal for a Digital Communication Network”, in which he coined the word packet – a small sub-part of the message the user wants to send – and also introduced the concept of an “interface computer” to sit between the user equipment and the packet network.
- ^ K. G. Coffman & A. M. Odlyzko (May 22, 2002). Optical Fiber Telecommunications IV-B: Systems and Impairments. Optics and Photonics (edited by I. Kaminow & T. Li). Academic Press. 1022 pages. ISBN 978-0123951731. Retrieved August 15, 2015.
- ^ B. Steil, Council on Foreign Relations (2002). Technological Innovation and Economic Performance. published by Princeton University Press 1 Jan 2002, 476 pages. ISBN 978-0691090917. Retrieved August 15, 2015.
- ^ Scantlebury, R. A.; Wilkinson, P.T. (1974). “The National Physical Laboratory Data Communications Network”. Proceedings of the 2nd ICCC 74. pp. 223–228.
- ^ C. Hempstead, W. Worthington, ed. (2005). Encyclopedia of 20th-Century Technology. Routledge. ISBN 9781135455514.
- ^ Ward, Mark (October 29, 2009). “Celebrating 40 years of the net”. BBC News.
- ^ “The National Physical Laboratory Data Communications Network”. 1974. Retrieved September 5, 2017.
- ^ “Donald Davies”. thocp.net.
- ^ “Donald Davies”. internethalloffame.org.
- ^ Strickland, Jonathan. “How ARPANET Works”.
- ^ Gromov, Gregory (1995). “Roads and Crossroads of Internet History”.
- ^ Hafner, Katie (1998). Where Wizards Stay Up Late: The Origins Of The Internet. Simon & Schuster. ISBN 978-0-684-83267-8.
- ^ Ronda Hauben (2001). “From the ARPANET to the Internet”. Retrieved May 28, 2009.
- ^ Postel, J. (November 1981). “The General Plan”. NCP/TCP transition plan. IETF. p. 2. doi:10.17487/RFC0801. RFC 801. Retrieved February 1, 2011.
- ^ “NORSAR and the Internet”. NORSAR. Archived from the original on June 7, 2009. Retrieved June 5, 2009.
- ^ The Merit Network, Inc. is an independent non-profit 501(c)(3) corporation governed by Michigan’s public universities. Merit receives administrative services under an agreement with the University of Michigan.
- ^ A Chronicle of Merit’s Early History Archived February 7, 2009, at the Wayback Machine, John Mulcahy, 1989, Merit Network, Ann Arbor, Michigan
- ^ Merit Network Timeline: 1970–1979 Archived January 1, 2016, at the Wayback Machine, Merit Network, Ann Arbor, Michigan
- ^ Merit Network Timeline: 1980–1989 Archived January 1, 2016, at the Wayback Machine, Merit Network, Ann Arbor, Michigan
- ^ “A Technical History of CYCLADES”. Technical Histories of the Internet & other Network Protocols. Computer Science Department, University of Texas Austin. Archived from the original on September 1, 2013.
- ^ “The Cyclades Experience: Results and Impacts”, Zimmermann, H., Proc. IFIP’77 Congress, Toronto, August 1977, pp. 465–469
- ^ “History of X.25, CCITT Plenary Assemblies and Book Colors”. Itu.int. Retrieved June 5, 2009.
- ^ “Events in British Telecomms History”. Events in British Telecomms History. Archived from the original on April 5, 2003. Retrieved November 25, 2005.
- ^ UUCP Internals Frequently Asked Questions
- ^ Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff (1999). “A Brief History of Internet”. arXiv:cs/9901011.
- ^ “Smithsonian Oral and Video Histories: Vinton Cerf”. National Museum of American History. Smithsonian Institution. April 24, 1990. Retrieved September 23, 2019.
- ^ “Internet History of 1970s”. Internet History. Computer History Museum. Retrieved September 23, 2019.
- ^ Vint Cerf, Yogen Dalal, Carl Sunshine, (December 1974), RFC 675, Specification of Internet Transmission Control Protocol
- ^ Panzaris, Georgios (2008). Machines and romances: the technical and narrative construction of networked computing as a general-purpose platform, 1960–1995. Stanford University. p. 128. Despite the misgivings of Xerox Corporation (which intended to make PUP the basis of a proprietary commercial networking product), researchers at Xerox PARC, including ARPANET pioneers Robert Metcalfe and Yogen Dalal, shared the basic contours of their research with colleagues at TCP and Internet working group meetings in 1976 and 1977, suggesting the possible benefits of separating TCP's routing and transmission control functions into two discrete layers.
- ^ Pelkey, James L. (2007). “Yogen Dalal”. Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988. Retrieved September 5, 2019.
- ^ “Computer History Museum and Web History Center Celebrate 30th Anniversary of Internet Milestone”. Retrieved November 22, 2007.
- ^ Ogg, Erica (November 8, 2007). “‘Internet van’ helped drive evolution of the Web”. CNET. Retrieved November 12, 2011.
- ^ “BGP Analysis Reports”. Retrieved January 9, 2013.
- ^ Jon Postel, NCP/TCP Transition Plan, RFC 801
- ^ David Roessner; Barry Bozeman; Irwin Feller; Christopher Hill; Nils Newman (1997). “The Role of NSF's Support of Engineering in Enabling Technological Innovation”. Archived from the original on December 19, 2008. Retrieved May 28, 2009.
- ^ “RFC 675 – Specification of internet transmission control program”. Tools.ietf.org. Retrieved May 28, 2009.
- ^ Tanenbaum, Andrew S. (1996). Computer Networks. Prentice Hall. ISBN 978-0-13-394248-4.
- ^ Martin, Olivier (2012). The “Hidden” Prehistory of European Research Networking. Trafford Publishing. ISBN 978-1466938724.
- ^ Hauben, Ronda (2004). “The Internet: On its International Origins and Collaborative Vision”. Amateur Computerist. 12 (2). Retrieved May 29, 2009.
- ^ “Internet Access Provider Lists”. Archived from the original on January 12, 2002. Retrieved May 10, 2012.
- ^ “RFC 1871 – CIDR and Classful Routing”. Tools.ietf.org. Retrieved May 28, 2009.
- ^ Ben Segal (1995). “A Short History of Internet Protocols at CERN”.
- ^ “A Brief History of the Internet in Korea (2005) – 한국 인터넷 역사 프로젝트”. sites.google.com. Retrieved May 30, 2016.
- ^ “Three decades since the introduction of the internet in Korea”. world.kbs.co.kr. Retrieved May 30, 2016.
- ^ “Internet History in Asia”. 16th APAN Meetings/Advanced Network Conference in Busan. Retrieved December 25, 2005.
- ^ “Percentage of Individuals using the Internet 2000–2012”, International Telecommunications Union (Geneva), June 2013, retrieved 22 June 2013
- ^ “Fixed (wired)-broadband subscriptions per 100 inhabitants 2012”, Dynamic Report, ITU ITC EYE, International Telecommunication Union. Retrieved on 29 June 2013.
- ^ “Active mobile-broadband subscriptions per 100 inhabitants 2012”, Dynamic Report, ITU ITC EYE, International Telecommunication Union. Retrieved on 29 June 2013.
- ^ “ICONS webpage”. Icons.afrinic.net. Archived from the original on May 9, 2007. Retrieved May 28, 2009.
- ^ Nepad, Eassy partnership ends in divorce, (South African) Financial Times FMTech, 2007
- ^ “APRICOT webpage”. Apricot.net. May 4, 2009. Retrieved May 28, 2009.
- ^ “A Brief History of the Internet in Korea”, Kilnam Chon, Hyunje Park, Kyungran Kang, and Youngeum Lee. Retrieved 16 April 2017.
- ^ “A brief history of the Internet in China”. China celebrates 10 years of being connected to the Internet. Retrieved December 25, 2005.
- ^ “Internet host count history”. Internet Systems Consortium. Archived from the original on May 18, 2012. Retrieved May 16, 2012.
- ^ “The World internet provider”. Retrieved May 28, 2009.
- ^ OGC-00-33R Department of Commerce: Relationship with the Internet Corporation for Assigned Names and Numbers (PDF). Government Accountability Office. July 7, 2000. p. 6.
- ^ Even after the appropriations act was amended in 1992 to give NSF more flexibility with regard to commercial traffic, NSF never felt that it could entirely do away with its Acceptable Use Policy and its restrictions on commercial traffic, see the response to Recommendation 5 in NSF's response to the Inspector General's review (an April 19, 1993 memo from Frederick Bernthal, Acting Director, to Linda Sundro, Inspector General, that is included at the end of Review of NSFNET, Office of the Inspector General, National Science Foundation, March 23, 1993)
- ^ Management of NSFNET, a transcript of the March 12, 1992 hearing before the Subcommittee on Science of the Committee on Science, Space, and Technology, U.S. House of Representatives, One Hundred Second Congress, Second Session, Hon. Rick Boucher, subcommittee chairman, presiding
- ^ “Retiring the NSFNET Backbone Service: Chronicling the End of an Era” Archived January 1, 2016, at the Wayback Machine, Susan R. Harris, PhD, and Elise Gerich, ConneXions, Vol. 10, No. 4, April 1996
- ^ “A Brief History of the Internet”.
- ^ NSF Solicitation 93-52 Archived March 5, 2016, at the Wayback Machine – Network Access Point Manager, Routing Arbiter, Regional Network Providers, and Very High Speed Backbone Network Services Provider for NSFNET and the NREN(SM) Program, May 6, 1993
- ^ “What is the difference between the Web and the Internet?”. W3C Help and FAQ. W3C. 2009. Retrieved July 16, 2015.
- ^ “World Wide Web Timeline”. Pew Research Center. March 11, 2014. Retrieved August 1, 2015.
- ^ Dewey, Caitlin (March 12, 2014). “36 Ways The Web Has Changed Us”. The Washington Post. Retrieved August 1, 2015.
- ^ “Website Analytics Tool”. Retrieved August 1, 2015.
- ^ “Tim Berners-Lee: WorldWideWeb, the first Web client”. W3.org.
- ^ “Frequently asked questions by the Press – Tim BL”. W3.org.
- ^ “Bloomberg Game Changers: Marc Andreessen”. Bloomberg.com. March 17, 2011.
- ^ “Browser”. Mashable. Retrieved September 2, 2011.
- ^ Vetter, Ronald J. (October 1994). “Mosaic and the World-Wide Web” (PDF). North Dakota State University. Archived from the original (PDF) on August 24, 2014. Retrieved November 20, 2010.
- ^ Berners-Lee, Tim. “What were the first WWW browsers?”. World Wide Web Consortium. Retrieved June 15, 2010.
- ^ Viswanathan, Ganesh; Dutt Mathur, Punit; Yammiyavar, Pradeep (March 2010). “From Web 1.0 to Web 2.0 and beyond: Reviewing usability heuristic criteria taking music sites as case studies”. IndiaHCI Conference. Mumbai. Retrieved February 20, 2015.
- ^ Web 1.0 defined – How stuff works
- ^ “Web 1.0 Revisited – Too many stupid buttons”. Complexify.com. Archived February 16, 2006, at the Wayback Machine
- ^ “The Right Size of Software”.
- ^ Jurgenson, Nathan; Ritzer, George (February 2, 2012), Ritzer, George (ed.), “The Internet, Web 2.0, and Beyond”, The Wiley-Blackwell Companion to Sociology, John Wiley & Sons, Ltd, pp. 626–648, doi:10.1002/9781444347388.ch33, ISBN 9781444347388, retrieved October 11, 2019
- ^ Graham, Paul (November 2005). “Web 2.0”. Retrieved August 2, 2006. I first heard the phrase ‘Web 2.0’ in the name of the Web 2.0 conference in 2004.
- ^ O’Reilly, Tim (September 30, 2005). “What Is Web 2.0”. O’Reilly Network. Retrieved August 6, 2006.
- ^ Strickland, Jonathan (December 28, 2007). “How Web 2.0 Works”. computer.howstuffworks.com. Retrieved February 28, 2015.
- ^ DiNucci, Darcy (1999). “Fragmented Future” (PDF). Print. 53 (4): 32.
- ^ Idehen, Kingsley. 2003. RSS: INJAN (It's not just about news). Blog. Blog Data Space. August 21. OpenLinkSW.com. Archived November 28, 2009, at the Wayback Machine
- ^ Idehen, Kingsley. 2003. Jeff Bezos Comments about Web Services. Blog. Blog Data Space. September 25. OpenLinkSW.com
- ^ Knorr, Eric. 2003. The year of Web services. CIO, December 15.
- ^ “John Robb’s Weblog”. Jrobb.mindplex.org. Archived from the original on December 5, 2003. Retrieved February 6, 2011.
- ^ O’Reilly, Tim, and John Battelle. 2004. Opening Welcome: State of the Internet Industry. In San Francisco, California, October 5.
- ^ “Web 2.0: Compact Definition”. Scholar.googleusercontent.com. October 1, 2005. Retrieved June 15, 2013.[dead link]
- ^ Flew, Terry (2008). New Media: An Introduction (3rd ed.). Melbourne: Oxford University Press. p. 19.
- ^ “Twitter post”. January 22, 2010. Archived from the original on November 8, 2013. Retrieved March 10, 2013.
- ^ NASA Extends the World Wide Web Out Into Space. NASA media advisory M10-012, January 22, 2010. Archived
- ^ NASA Successfully Tests First Deep Space Internet. NASA media release 08-298, November 18, 2008 Archived
- ^ “Disruption Tolerant Networking for Space Operations (DTN). July 31, 2012”. Archived from the original on July 29, 2012. Retrieved August 26, 2012.
- ^ “Cerf: 2011 will be proving point for ‘InterPlanetary Internet'”. Network World interview with Vint Cerf. February 18, 2011. Archived from the original on May 24, 2012. Retrieved April 23, 2012.
- ^ “Internet Architecture”. IAB Architectural Principles of the Internet. Retrieved April 10, 2012.
- ^ “DDN NIC”. IAB Recommended Policy on Distributing Internet Identifier Assignment. Retrieved December 26, 2005.
- ^ Internet Hall of Fame
- ^ Elizabeth Feinler, IEEE Annals, p. 74.
- ^ “GSI-Network Solutions”. TRANSITION OF NIC SERVICES. Retrieved December 26, 2005.
- ^ “Thomas v. NSI, Civ. No. 97-2412 (TFH), Sec. I.A. (DCDC April 6, 1998)”. Lw.bna.com. Archived from the original on December 22, 2008. Retrieved May 28, 2009.
- ^ “RFC 1366”. Guidelines for Management of IP Address Space. Retrieved April 10, 2012.
- ^ “Development of the Regional Internet Registry System”. Cisco. Retrieved April 10, 2012.
- ^ “NIS Manager Award Announced”. NSF Network information services awards. Archived from the original on May 24, 2005. Retrieved December 25, 2005.
- ^ “Internet Moves Toward Privatization”. http://www.nsf.gov. June 24, 1997.
- ^ “RFC 2860”. Memorandum of Understanding Concerning the Technical Work of the Internet Assigned Numbers Authority. Retrieved December 26, 2005.
- ^ “ICANN Bylaws”. Retrieved April 10, 2012.
- ^ “The Tao of IETF: A Novice's Guide to the Internet Engineering Task Force”, FYI 17 and RFC 4677, P. Hoffman and S. Harris, Internet Society, September 2006
- ^ “A Mission Statement for the IETF”, H. Alvestrand, Internet Society, BCP 95 and RFC 3935, October 2004
- ^ “An IESG charter”, H. Alvestrand, RFC 3710, Internet Society, February 2004
- ^ “Charter of the Internet Architecture Board (IAB)”, B. Carpenter, BCP 39 and RFC 2850, Internet Society, May 2000
- ^ “IAB Thoughts on the Role of the Internet Research Task Force (IRTF)”, S. Floyd, V. Paxson, A. Falk (eds), RFC 4440, Internet Society, March 2006
- ^ “The RFC Series and RFC Editor”, L. Daigle, RFC 4844, Internet Society, July 2007
- ^ “Not All RFCs are Standards”, C. Huitema, J. Postel, S. Crocker, RFC 1796, Internet Society, April 1995
- ^ Internet Society (ISOC) – Introduction to ISOC
- ^ Internet Society (ISOC) – ISOC's Standards Activities. Archived December 13, 2011, at the Wayback Machine
- ^ USC/ICANN Transition Agreement
- ^ ICANN cuts cord to US government, gets broader oversight: ICANN, which oversees the Internet’s domain name system, is a private nonprofit that reports to the US Department of Commerce. Under a new agreement, that relationship will change, and ICANN’s accountability goes global Nate Anderson, September 30, 2009
- ^ Rhoads, Christopher (October 2, 2009). “U.S. Eases Grip Over Web Body: Move Addresses Criticisms as Internet Usage Becomes More Global”. The Wall Street Journal.
- ^ Rabkin, Jeremy; Eisenach, Jeffrey (October 2, 2009). “The U.S. Abandons the Internet: Multilateral governance of the domain name system risks censorship and repression”. The Wall Street Journal.
- ^ “Stewardship of IANA Functions Transitions to Global Internet Community as Contract with U.S. Government Ends – ICANN”. http://www.icann.org. Retrieved October 1, 2016.
- ^ Mueller, Milton L. (2010). Networks and States: The Global Politics of Internet Governance. MIT Press. p. 67. ISBN 978-0-262-01459-5.
- ^ Mueller, Milton L. (2010). Networks and States: The Global Politics of Internet Governance. MIT Press. pp. 79–80. ISBN 978-0-262-01459-5.
- ^ DeNardis, Laura, The Emerging Field of Internet Governance (September 17, 2010). Yale Information Society Project Working Paper Series.
- ^ Wyatt, Edward (April 23, 2014). “F.C.C., in ‘Net Neutrality’ Turnaround, Plans to Allow Fast Lane”. The New York Times. Retrieved April 23, 2014.
- ^ Staff (April 24, 2014). “Creating a Two-Speed Internet”. The New York Times. Retrieved April 25, 2014.
- ^ Carr, David (May 11, 2014). “Warnings Along F.C.C.’s Fast Lane”. The New York Times. Retrieved May 11, 2014.
- ^ Crawford, Susan (April 28, 2014). “The Wire Next Time”. The New York Times. Retrieved April 28, 2014.
- ^ Staff (May 15, 2014). “Searching for Fairness on the Internet”. The New York Times. Retrieved May 15, 2014.
- ^ Wyatt, Edward (May 15, 2014). “F.C.C. Backs Opening Net Rules for Debate”. The New York Times. Retrieved May 15, 2014.
- ^ Wyatt, Edward (November 10, 2014). “Obama Asks F.C.C. to Adopt Tough Net Neutrality Rules”. The New York Times. Retrieved November 15, 2014.
- ^ NYT Editorial Board (November 14, 2014). “Why the F.C.C. Should Heed President Obama on Internet Regulation”. The New York Times. Retrieved November 15, 2014.
- ^ Sepulveda, Ambassador Daniel A. (January 21, 2015). “The World Is Watching Our Net Neutrality Debate, So Let’s Get It Right”. Wired. Retrieved January 20, 2015.
- ^ Weisman, Jonathan (January 19, 2015). “Shifting Politics of Net Neutrality Debate Ahead of F.C.C. Vote”. The New York Times. Retrieved January 20, 2015.
- ^ Staff (January 16, 2015). “H. R. _ 114th Congress, 1st Session [Discussion Draft] – To amend the Communications Act of 1934 to ensure Internet openness…” (PDF). U. S. Congress. Retrieved January 20, 2015.
- ^ Lohr, Steve (February 2, 2015). “In Net Neutrality Push, F.C.C. Is Expected to Propose Regulating Internet Service as a Utility”. The New York Times. Retrieved February 2, 2015.
- ^ Lohr, Steve (February 2, 2015). “F.C.C. Chief Wants to Override State Laws Curbing Community Net Services”. The New York Times. Retrieved February 2, 2015.
- ^ Flaherty, Anne (January 31, 2015). “Just whose Internet is it? New federal rules may answer that”. Associated Press. Retrieved January 31, 2015.
- ^ Fung, Brian (January 2, 2015). “Get ready: The FCC says it will vote on net neutrality in February”. The Washington Post. Retrieved January 2, 2015.
- ^ Staff (January 2, 2015). “FCC to vote next month on net neutrality rules”. Associated Press. Retrieved January 2, 2015.
- ^ Lohr, Steve (February 4, 2015). “F.C.C. Plans Strong Hand to Regulate the Internet”. The New York Times. Retrieved February 5, 2015.
- ^ Wheeler, Tom (February 4, 2015). “FCC Chairman Tom Wheeler: This Is How We Will Ensure Net Neutrality”. Wired. Retrieved February 5, 2015.
- ^ The Editorial Board (February 6, 2015). “Courage and Good Sense at the F.C.C. – Net Neutrality’s Wise New Rules”. The New York Times. Retrieved February 6, 2015.
- ^ Weisman, Jonathan (February 24, 2015). “As Republicans Concede, F.C.C. Is Expected to Enforce Net Neutrality”. The New York Times. Retrieved February 24, 2015.
- ^ Lohr, Steve (February 25, 2015). “The Push for Net Neutrality Arose From Lack of Choice”. The New York Times. Retrieved February 25, 2015.
- ^ Staff (February 26, 2015). “FCC Adopts Strong, Sustainable Rules To Protect The Open Internet” (PDF). Federal Communications Commission. Retrieved February 26, 2015.
- ^ Ruiz, Rebecca R.; Lohr, Steve (February 26, 2015). “In Net Neutrality Victory, F.C.C. Classifies Broadband Internet Service as a Public Utility”. The New York Times. Retrieved February 26, 2015.
- ^ Flaherty, Anne (February 25, 2015). “FACT CHECK: Talking heads skew ‘net neutrality’ debate”. Associated Press. Retrieved February 26, 2015.
- ^ Liebelson, Dana (February 26, 2015). “Net Neutrality Prevails in Historic FCC Vote”. The Huffington Post. Retrieved February 27, 2015.
- ^ Ruiz, Rebecca R. (March 12, 2015). “F.C.C. Sets Net Neutrality Rules”. The New York Times. Retrieved March 13, 2015.
- ^ Sommer, Jeff (March 12, 2015). “What the Net Neutrality Rules Say”. The New York Times. Retrieved March 13, 2015.
- ^ FCC Staff (March 12, 2015). “Federal Communications Commission – FCC 15–24 – In the Matter of Protecting and Promoting the Open Internet – GN Docket No. 14-28 – Report and Order on Remand, Declaratory Ruling, and Order” (PDF). Federal Communications Commission. Retrieved March 13, 2015.
- ^ Reisinger, Don (April 13, 2015). “Net neutrality rules get published – let the lawsuits begin”. CNET. Retrieved April 13, 2015.
- ^ Federal Communications Commission (April 13, 2015). “Protecting and Promoting the Open Internet – A Rule by the Federal Communications Commission on 04/13/2015”. Federal Register. Retrieved April 13, 2015.
- ^ Kang, Cecilia (December 14, 2017). “F.C.C. Repeals Net Neutrality Rules”. The New York Times. ISSN 0362-4331. Retrieved February 2, 2018.
- ^ “The Risks Digest”. Great moments in e-mail history. Retrieved April 27, 2006.
- ^ “The History of Electronic Mail”. Retrieved December 23, 2005.
- ^ “The First Network Email”. Retrieved December 23, 2005.
- ^ “Where Have all the Gophers Gone? Why the Web beat Gopher in the Battle for Protocol Mind Share”. Ils.unc.edu. Retrieved October 17, 2015.
- ^ Bush, Vannevar (1945). “As We May Think”. Retrieved May 28, 2009.
- ^ Douglas Engelbart (1962). “Augmenting Human Intellect: A Conceptual Framework”. Archived from the original on November 24, 2005. Retrieved November 25, 2005.
- ^ “The Early World Wide Web at SLAC”. The Early World Wide Web at SLAC: Documentation of the Early Web at SLAC. Retrieved November 25, 2005.
- ^ “Millennium Technology Prize 2004 awarded to inventor of World Wide Web”. Millennium Technology Prize. Archived from the original on August 30, 2007. Retrieved May 25, 2008.
- ^ “Mosaic Web Browser History – NCSA, Marc Andreessen, Eric Bina”. Livinginternet.com. Retrieved May 28, 2009.
- ^ “NCSA Mosaic – September 10, 1993 Demo”. Totic.org. Retrieved May 28, 2009.
- ^ “Vice President Al Gore’s ENIAC Anniversary Speech”. Cs.washington.edu. February 14, 1996. Retrieved May 28, 2009.
- ^ “UCLA Center for Communication Policy”. Digitalcenter.org. Archived from the original on May 26, 2009. Retrieved May 28, 2009.
- ^ Mirror of Official site map Archived February 21, 2009, at the Wayback Machine
- ^ Mirror of Official Site Archived December 22, 2008, at the Wayback Machine
- ^ “24 Hours in Cyberspace (and more)”. Baychi.org. Retrieved May 28, 2009.
- ^ “The human face of cyberspace, painted in random images”. Archive.southcoasttoday.com. Retrieved May 28, 2009.
- ^ Stross, Randall (September 22, 2009). Planet Google: One Company’s Audacious Plan to Organize Everything We Know. Simon and Schuster. ISBN 978-1-4165-4696-2. Retrieved December 9, 2012.
- ^ “Microsoft’s New Search at Bing.com Helps People Make Better Decisions: Decision Engine goes beyond search to help customers deal with information overload (Press Release)”. Microsoft News Center. May 28, 2009. Archived from the original on June 29, 2011. Retrieved May 29, 2009.
- ^ “Microsoft and Yahoo seal web deal”, BBC Mobile News, July 29, 2009.
- ^ RFC 765: File Transfer Protocol (FTP), J. Postel and J. Reynolds, ISI, October 1985
- ^ Kenneth P. Birman (March 25, 2005). Reliable Distributed Systems: Technologies, Web Services, and Applications. Springer-Verlag New York Incorporated. ISBN 9780387215099. Retrieved January 20, 2012.
- ^ Menta, Richard (July 20, 2001). “Napster Clones Crush Napster. Take 6 out of the Top 10 Downloads on CNet”. MP3 Newswire.
- ^ Movie File-Sharing Booming: Study Archived February 17, 2012, at the Wayback Machine, Solutions Research Group, Toronto, 24 January 2006
- ^ Menta, Richard (December 9, 1999). “RIAA Sues Music Startup Napster for $20 Billion”. MP3 Newswire.
- ^ “EFF: What Peer-to-Peer Developers Need to Know about Copyright Law”. W2.eff.org. Archived from the original on January 15, 2012. Retrieved January 20, 2012.
- ^ Kobie, Nicole (November 26, 2010). “Pirate Bay trio lose appeal against jail sentences”. pcpro.co.uk. PCPRO. Retrieved November 26, 2010.
- ^ “Poll: Young Say File Sharing OK”, Bootie Cosgrove-Mather, CBS News, 11 February 2009
- ^ Green, Stuart P. (March 29, 2012). “OP-ED CONTRIBUTOR; When Stealing Isn’t Stealing”. The New York Times. p. 27.
- ^ Nasdaq peak of 5,048.62
- ^ Susmita Dasgupta; Somik V. Lall; David Wheeler (2001). Policy Reform, Economic Growth, and the Digital Divide: An Econometric Analysis. World Bank Publications. pp. 1–3. Retrieved February 11, 2013.
- ^ Hillebrand, Friedhelm (2002). Hillebrand, Friedhelm (ed.). GSM and UMTS, The Creation of Global Mobile Communications. John Wiley & Sons. ISBN 978-0-470-84322-2.
- ^ Christoph Classen, Susanne Kinnebrock & Maria Löblich (Eds.): Towards Web History: Sources, Methods, and Challenges in the Digital Age Archived May 9, 2013, at the Wayback Machine. In Historical Social Research 37 (4): 97–188. 2012.
- ^ Barras, Colin (August 23, 2007). “An Internet Pioneer Ponders the Next Revolution”. Illuminating the net’s Dark Ages. Retrieved February 26, 2008.
- Abbate, Janet (1999). Inventing the Internet. Cambridge, Massachusetts: MIT Press. ISBN 978-0262011723.
- Cerf, Vinton (1993). How the Internet Came to Be.
- Ryan, Johnny (2010). A history of the Internet and the digital future. London, England: Reaktion Books. ISBN 978-1861897770.
- Thomas Greene; Larry James Landweber; George Strawn (2003). “A Brief History of NSF and the Internet”. National Science Foundation.
- Internet History Timeline – Computer History Museum
- Histories of the Internet – Internet Society
- Hobbes’ Internet Timeline 12
- History of the Internet, a short animated film (2009)
- History of the Internet at Curlie
- This page was last edited on 25 October 2019, at 23:21 (UTC).
History of the Internet
The Internet grew out of military research at the height of the Cold War. In that context, with two ideologically and politically antagonistic blocs exerting enormous control and influence over the world, any mechanism, any innovation, any new tool could contribute to the contest led by the Soviet Union and the United States: both superpowers understood the effectiveness and absolute necessity of the means of communication. The United States government feared a Soviet strike on its military bases; such an attack could expose classified information and leave the country vulnerable. A model for exchanging and sharing information was therefore conceived that would allow data to be decentralized, so that if the Pentagon were hit, the information stored there would not be lost. What was needed was a network: the ARPANET, created by ARPA, the Advanced Research Projects Agency. As early as 1962, J. C. R. Licklider of the Massachusetts Institute of Technology (MIT) was already speaking of an Intergalactic Computer Network.[citation needed] Other important actors also shaped the Internet's emergence, among them university professors (such as Ken King), students (such as Vint Cerf), technology companies (such as IBM) and some American politicians (such as Al Gore), overturning the earlier thesis that emphasized only the military side of its creation.
The ARPANET worked through a system known as packet switching, a data-transmission scheme in which information is divided into small packets, each containing a fragment of the data, the destination address, and information that allows the original message to be reassembled. The enemy attack never came, but what the U.S. Department of Defense did not know was that it was launching the greatest media phenomenon of the 20th century, the only communication medium to reach about 50 million people in only four years.
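The packet mechanics described above (split the message, stamp each fragment with a destination and a sequence number, reassemble at the far end) can be sketched in a few lines. This is an illustrative toy, not any historical ARPANET format; the `Packet` fields and the 8-character chunk size are assumptions chosen for readability.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dest: str      # destination address
    seq: int       # sequence number, used to reassemble
    total: int     # how many packets make up the whole message
    payload: str   # fragment of the original data

def packetize(message: str, dest: str, size: int = 8) -> list[Packet]:
    """Split a message into small fixed-size packets, as packet switching does."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet(dest, seq, len(chunks), c) for seq, c in enumerate(chunks)]

def reassemble(packets: list[Packet]) -> str:
    """Rebuild the original message even if packets arrived out of order."""
    return "".join(p.payload for p in sorted(packets, key=lambda p: p.seq))
```

Because each packet carries its own sequence number, the receiver does not care in what order (or along what route) the fragments arrive, which is exactly what made the scheme robust to losing individual links.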
On October 29, 1969, what can be considered the first e-mail in history was transmitted. The text of that first message was meant to be “LOGIN”, as Professor Leonard Kleinrock of the University of California, Los Angeles (UCLA) intended, but the receiving computer at the Stanford Research Institute crashed after the letter “O” arrived.
By the 1970s, tension between the USSR and the USA had eased, and the two powers entered what history would come to call Peaceful Coexistence. With no imminent attack to fear, the U.S. government allowed researchers conducting defense-related studies at their universities to join the ARPANET as well. As a result, the ARPANET began to struggle to administer the system, given the large and growing number of university sites on it.
The system was then split in two: MILNET, with the military sites, and a new ARPANET, with the non-military ones. In this freer environment the network could develop. Not only researchers but also students, and their friends, gained access to the existing work and joined efforts to improve it. There was a time in the United States when buying a ready-made computer was barely even considered, since the fun lay in assembling one.

The same logic applied to the Internet. Young people of the counterculture, ideologically committed to a utopia of information diffusion, contributed decisively to shaping the Internet as it is known today, to the point that the Spanish sociologist and network scholar Manuel Castells stated in The Internet Galaxy (2003) that the Internet is, above all, a cultural creation.
A technical system called the Internet Protocol (IP) allowed traffic to be forwarded from one network to another. All networks connected by IP addressing on the Internet communicate so that all of them can exchange messages. Through the National Science Foundation, the U.S. government invested in the creation of backbones: powerful computers connected by links capable of carrying large data flows, such as fiber-optic channels, satellite links and radio links. Besides these, there are backbones built by private companies, to which smaller networks connect in a more or less anarchic way. That, in essence, is what the Internet consists of, and it has no single owner.
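The forwarding step the paragraph describes, in which each network hands traffic toward its destination, comes down to matching a destination address against a table of network prefixes and picking the most specific match. Below is a minimal sketch using Python's standard `ipaddress` module; the prefixes and next-hop names (`backbone-a`, `campus-router`, `default-gateway`) are invented for illustration, while real backbone routers hold hundreds of thousands of such entries.

```python
import ipaddress

# A toy routing table mapping network prefixes to next hops (all names hypothetical).
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "backbone-a",
    ipaddress.ip_network("10.1.0.0/16"): "campus-router",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",  # catch-all route
}

def next_hop(dest: str) -> str:
    """Forward like an IP router: longest-prefix match against the table."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific prefix wins
    return routes[best]
```

For example, `next_hop("10.1.2.3")` falls inside both `10.0.0.0/8` and `10.1.0.0/16`, and the longer `/16` prefix decides where the packet goes next.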
The scientist Tim Berners-Lee, at CERN, created the World Wide Web, proposing it in 1989 and releasing it to the public in 1991.
The American company Netscape created SSL (Secure Sockets Layer), the security layer behind HTTPS (HyperText Transfer Protocol Secure), making it possible to send encrypted data for commercial transactions over the Internet.
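HTTPS is ordinary HTTP text carried inside an encrypted TLS session (TLS being the modern successor of Netscape's SSL). The sketch below, using Python's standard `ssl` module, shows the two halves: a client TLS context with certificate verification on, and the plaintext request that would travel inside the tunnel. The host `example.org` is purely illustrative.

```python
import ssl

def https_context() -> ssl.SSLContext:
    """A client-side TLS context; certificate and hostname verification
    are enabled by default, trusting the system CA store."""
    return ssl.create_default_context()

def http_request(host: str) -> bytes:
    """The plaintext HTTP request that travels inside the TLS tunnel.
    Wrapping a TCP socket with ctx.wrap_socket(sock, server_hostname=host)
    would encrypt exactly these bytes on the wire."""
    return (f"GET / HTTP/1.1\r\nHost: {host}\r\n"
            "Connection: close\r\n\r\n").encode()
```

The design point is that encryption is layered underneath an unchanged protocol: the web server and browser still speak plain HTTP to each other, but an eavesdropper on the wire sees only TLS ciphertext.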
Finally, it is worth noting that as early as 1992 then-Senator Al Gore was already speaking of the “Superhighway of Information”. The basic working unit of this information superhighway was the continuous exchange, sharing and flow of information across the four corners of the world through a worldwide network, the Internet. Global interest, allied to a commercial interest that clearly saw the financial potential of that “novelty”, drove the boom and the popularization of the Internet in the 1990s. By 2003, more than 600 million people were connected to the network; according to Internet World Stats, by June 2007 the number approached 1.234 billion users.
Logical map of the ARPANET, March 1977
Promoted to head of the information processing office at DARPA, Robert Taylor set out to realize Licklider's ideas of an interconnected network system. With Larry Roberts of MIT, he began the project to build that network. The first ARPANET connection was established between the University of California, Los Angeles and the Stanford Research Institute at 10:30 p.m. on October 29, 1969.

On December 5, 1969, a four-node network was formed with the addition of the University of Utah and the University of California, Santa Barbara. Building on ideas developed in ALOHAnet, the ARPANET evolved quickly. By 1981 the number of hosts had grown to 213, with a new host being added roughly every 20 days.
The ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies of the time. ARPANET development centered on the Request for Comments (RFC) process, still used today to propose and distribute Internet protocols and systems.
In 1965, Donald Davies of the National Physical Laboratory (UK) proposed a national data network based on packet switching. The proposal was not taken up nationally, but by 1970 he had designed and built the Mark I packet-switched network to meet the needs of the multidisciplinary laboratory and to prove the technology under operational conditions. By 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986.
The Merit Network is a U.S. public nonprofit organization formed to operate a computer network among three public universities in the state of Michigan, as an aid to the state's educational and economic development. It was formed in 1966 as the Michigan Educational Research Information Triad by Michigan State University, the University of Michigan and Wayne State University, and is headquartered in Ann Arbor. With initial support from the state of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971. It is today the longest-running regional computer network in the United States.
Map of the TCP/IP test network, February 1982
With so many different networking methods, something was needed to unify them. Robert Kahn of DARPA and ARPANET recruited Vint Cerf of Stanford University to work with him on the problem. By 1973 they had worked out a fundamental reformulation, in which the differences between network protocols were hidden by a common inter-network protocol, and, instead of the network being responsible for reliability as in the ARPANET, the hosts became responsible. Cerf credited Hubert Zimmermann, Gérard Le Lann and Louis Pouzin (designer of the CYCLADES network) as fundamental to this new design.

The resulting protocol specification contains the first attested use of the term internet, as an abbreviation of internetworking; later RFCs repeated the usage, so the word began as an adjective rather than the noun it is today. With the network's role reduced to the bare minimum, it became possible to join practically any networks together, regardless of their characteristics, thereby solving Kahn's initial problem. DARPA agreed to fund development of the software, and after a few years of work the first demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted. On November 22, 1977, a three-network demonstration was conducted, including the ARPANET, the Packet Radio Network and the Atlantic Packet Satellite network, all sponsored by DARPA. Stemming from the first TCP specifications in 1974, TCP/IP emerged in mid-to-late 1978 in nearly its final form. In 1981 the associated standards were published as RFCs 791, 792 and 793 and adopted for use. DARPA sponsored and encouraged the development of TCP/IP implementations for various operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On January 1, 1983, a date known as Flag Day, TCP/IP became the only protocol approved on the ARPANET, replacing the earlier NCP protocol.
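The shift described above, making the hosts rather than the network responsible for reliability, is the end-to-end idea behind TCP running over unreliable IP datagrams. A minimal stop-and-wait sketch (not TCP itself; the coin-flip channel and seed are assumptions for a deterministic toy) shows the principle: the network may silently drop any transmission, and the sending host simply retransmits until the packet gets through and is acknowledged.

```python
import random

def reliable_transfer(packets: list[str], loss: float = 0.5, seed: int = 7) -> list[str]:
    """Toy stop-and-wait protocol over a lossy channel: each transmission
    is a coin flip that may drop the packet, and the sending host keeps
    retransmitting until the packet is delivered and acknowledged."""
    rng = random.Random(seed)        # deterministic 'network' for the sketch
    received = []
    for payload in packets:
        while rng.random() < loss:   # dropped by the network: retransmit
            pass
        received.append(payload)     # delivered; receiver acks, sender advances
    return received
```

However unreliable the channel, every packet eventually arrives, in order, without the network itself offering any guarantees; that is precisely why gluing together wildly different networks became possible.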
The CYCLADES packet-switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the early ARPANET design and to support network research in general. It was the first network to make the hosts responsible for the reliable delivery of data, rather than the network itself, using unreliable datagrams and associated end-to-end protocol mechanisms.
UUCP and Usenet
In 1979, two Duke University students, Tom Truscott and Jim Ellis, had the idea of using simple Bourne shell scripts to transfer news and messages over a serial-line UUCP connection to the nearby University of North Carolina at Chapel Hill. After the public release of the software, the mesh of UUCP hosts forwarding Usenet news expanded rapidly. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly thanks to their lower costs, their ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies (commercial organizations could, for example, supply bug fixes) compared with later networks such as CSNET and BITNET. All of their connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984. The Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity on UUCP to redistribute newsgroup messages across its Italian nodes (about 100 at the time), owned both by private individuals and by small companies. The Sublink Network arguably represented one of the first examples of Internet technology advancing through popular diffusion.
The Emergence of the Internet in Portugal
The University of Lisbon was the first entity in Portugal to have an Internet connection. Shortly afterwards, the University of Minho followed, using a 64 kbit/s line (from Telepac, IP over X.25) to France.

In 1992, the FCCN began registering domains under .pt, and by December 1993 there were 40 .pt domains registered. The first Portuguese web server was brought online by LNEC (Laboratório Nacional de Engenharia Civil) in 1992.
The Internet in Brazil and the RNP
In Brazil, the first embryonic networks appeared in 1988, linking Brazilian universities to institutions in the United States. That same year, Ibase began testing Alternex, the first non-academic, non-governmental Brazilian Internet service. Initially AlterNex was restricted to Ibase members and associates; only in 1992 was it opened to the public.

In 1989, the Ministry of Science and Technology launched a pioneering project, the Rede Nacional de Ensino e Pesquisa (RNP, National Education and Research Network). Still in existence today, the RNP is a public-interest organization whose main mission is to operate a nationwide academic network. When launched, it aimed to train high-technology human resources and to spread Internet technology by deploying the first national backbone.

The backbone works as a spine: it is the infrastructure that connects all the points of a network. The first Brazilian backbone was inaugurated in 1991, intended exclusively for the academic community. Later, in 1995, the government decided to open the backbone and provide connectivity to commercial access providers. That decision sparked a debate over the RNP's role as a strictly academic network, free for academics and charged to all other consumers. With the growth of the commercial Internet, the RNP turned its attention back to the scientific community.

From 1997 onward, a new phase of the Brazilian Internet began. Rising access and the need for a faster, more secure infrastructure led to investment in new technologies. However, since no fiber-optic infrastructure yet covered the whole national territory, the initial choice was to create high-speed local networks, taking advantage of the existing structure of some metropolitan regions. As part of these investments, the RNP2 backbone was deployed in 2000 to interconnect the whole country in a high-technology network. Today, RNP2 connects all 27 Brazilian states and links more than 300 higher-education and research institutions in the country, including INMETRO and its regional offices.

Another advance came in 2002, when the then president of the republic turned the RNP into a social organization. This gave it greater administrative autonomy to carry out its tasks, while the public authorities gained more effective means of evaluating and demanding results. The goals of this transformation included providing advanced IP network infrastructure services, deploying and evaluating new network technologies, disseminating those technologies, and training human resources in network security, management and routing.

The installed base of computers in Brazil has reached 40 million, according to research by the São Paulo School of Business Administration of the Fundação Getúlio Vargas. The figure, which includes computers in companies and homes, represents 25% growth over the base recorded in the same period of the previous year.
- ↑ Sandroni, Gabriela Araújo (January 1, 2015). «Breve Historia y Origen del Internet». Academia.edu. Retrieved April 25, 2015. Archived from the original on May 5, 2015.
- ↑ Diário Digital. «Primeira mensagem de correio electrónico enviada há 40 anos». Retrieved October 26, 2009.
- ↑ «RFC 1 – Host Software». tools.ietf.org. Retrieved June 1, 2016.
- ↑ Ward, Mark. «Celebrating 40 years of the net». Retrieved October 25, 2011.
- ↑ «History». merit.edu. 2011. Retrieved October 27, 2011.
- ↑ «Ann Arbor’s Merit Network looks to complete statewide fiberoptics network». concentratemedia.com. 2011. Retrieved October 27, 2011.
- ↑ Cerf, Vinton; Dalal, Yogen; Sunshine, Carl (1974). «RFC 675 – Specification of Internet Transmission Control Program». Network Working Group. Retrieved November 30, 2012.
- ↑ THINK Protocols team. «History of CYCLADES». Retrieved October 25, 2011. Archived from the original on September 1, 2013.
- ↑ Taylor, Ian Lance. «UUCP Internals FAQ». Retrieved October 25, 2011.
- ↑ «PUUG: Porta para a Internet desde 1990». Flickr. January 24, 2009. Retrieved November 30, 2012.
- ↑ «Os Fornecedores de Acesso à Internet». net.News. April 30, 1996. Retrieved November 30, 2012.
- Chronology of the Internet in Portugal
- “A Brief History of the Internet”
- Merit Network
- This page was last edited at 18:28 on October 21, 2019.
Researcher reconstructs the day, 50 years ago, when he created the Internet
From a laboratory at the University of California, Leonard Kleinrock made the network's first connection. By Sérgio Matsuura, 10/27/2019 03:30, updated 10/28/2019 10:11
RIO: A modest experiment conducted half a century ago in the United States paved the way for a revolution with few precedents in human history. It sounds like an exaggeration, but it is not. From a computer installed in a laboratory at the University of California, Los Angeles (UCLA), programming student Charley Kline, supervised by Professor Leonard Kleinrock, sent a message to a machine at the Stanford Research Institute (SRI). There the ARPANET was born, the network that gave rise to the Internet as we know it today and that celebrates its 50th anniversary on Tuesday.
In an interview with O GLOBO, Kleinrock, 85, recalls that, curiously, the archetype of the network that today connects more than 4.5 billion people on the planet could not even carry the first word. On that October 29, 1969, the goal was to test the network's first two nodes by sending a message, an ancestor of e-mail, with the word “LOGIN” typed on the computer in Los Angeles. The then-ambitious aim was for the interlocutors at Stanford to see it on their machine's screen, demonstrating that a computer could be controlled remotely. Kline typed the first two letters, “L” and “O”, but when he pressed “G” the system crashed.

“There was a failure in the computer at SRI, which overloaded its memory, and the system went down,” says Kleinrock, still a professor at UCLA. “That was fixed quickly, and right afterwards we managed to do the LOGIN. But the first message exchanged on the Internet really was LO, as in ‘Lo and behold’.”
Over the last five decades the network has evolved considerably, moving from an environment restricted to American researchers and the military to a much broader one, open to the global public. Yet the theoretical foundations of the UCLA experiment remain the same. The idea at the time was to form a decentralized communication network, with messages divided into small data packets for transit. That is how the Internet works to this day.
Social use came as a surprise

The uses, however, evolved dramatically. The original goal was to share the processing time of the computers installed at universities. Given the cost, students and researchers did not have powerful machines at home or in the office, so they could use the ARPANET to operate other people's equipment remotely. E-mail, for example, only appeared in 1972.

“At the time, I imagined the network would be about people talking to computers and computers talking to computers, but not about people talking to people. I didn't foresee something important: the social-network aspect of the Internet,” says Kleinrock. “I didn't realize that my 99-year-old mother would be on the Internet at the same time as my 7-year-old granddaughter.”

And it was this role as a facilitator of communication that made the network so popular, the professor argues. Soon after it appeared, e-mail dominated network traffic. Today, social networks such as Facebook, Twitter and Instagram are among the Internet's most popular services, but they are far from the only ones.

The Internet has become omnipresent in modern societies. It had a decisive impact on the productivity gains of economies and favored the rise of giant technology companies. It is hard to think of any activity that remains completely offline.

By Kleinrock's predictions, the network will work its way even deeper into everyday life, to the point of becoming “invisible”, like electricity.

“Electricity has a wonderful interface: two holes in the wall. You insert the plug and you get something called electricity, without complications,” says the professor. “The Internet hasn't reached that invisibility yet. It is complicated to use; each application has its own protocols. Keyboards are different and small. The interface is difficult, despite the wonderful smartphones out there. We will reach invisibility in a few years, but we are not there yet.”
At that first connection, did you have in mind the revolution that was beginning?
We didn't attach much importance to it. We had no camera, no recorder, only written records. But I had a vision of what the network would become. It could be used so that remote machines would help us process certain things, such as sending off an image to have it edited, as happens today. In particular, I foresaw that the network would always be on and available. That anyone could connect from anywhere, at any time, and that it would be invisible.
The ARPANET is often described as a military project. What was the military's role in it?
It was minimal. It was merely the mechanism for money to flow from the government to researchers like me. And from me to the students. There was no military application or expectation placed on us researchers. That was not the case. What people say is an urban legend.
As one of the fathers of the Internet, what do you think of it today?
The first 20 years were led by the engineers, forming a foundation. Then, in the late 1980s, it began to cross beyond the boundary of the scientific community to the general public. And as soon as that happened, monetization became very important. Suddenly, all the negative things that happen when people try to make money from technology began to happen. That is when the dark side of the network emerged, with hackers, fraud, identity theft, lack of privacy, tracking, pedophilia, fake news, all driven by the need to generate profit and influence.
How do you see the impact on democracies?
I am seriously worried about it. There is an existential threat to democracy. Although we gave everyone a voice, what we have is a lot of noise out there, and the ones who get heard are only the ones who shout loudest. And those voices are at the extremes, on both sides. Everything else becomes uninteresting. So the extremes are the voices being heard.
What do you expect for the future of the network?
Those are two questions. The first, about infrastructure, about technology, is not hard. The network will become invisible. That hasn't happened yet, but I believe it will in the next decade. The second question is about applications and services, and that is impossible to predict. We didn't foresee e-mail, YouTube, social networks, search services, Napster. So what I can predict is that we will not be able to predict applications, and that is a good thing. We created a system that will keep surprising us. And that is where the opportunity for the next generation lies. So it is a very honest answer. And a very optimistic one.
History of the Internet
Daniela Diana, licensed teacher of Languages and Literature
The history of the Internet begins in the setting of the Cold War (1945-1991), when the two superpowers involved, the United States and the Soviet Union, were divided into the capitalist and socialist blocs and contended for power and hegemony.
ARPANET and the origin of the Internet
Fearing Soviet attacks and seeking to ease the exchange of information, the U.S. Department of Defense, through its Advanced Research Projects Agency (ARPA), created a system for sharing information between geographically distant people, in order to facilitate war strategy.

At that moment the prototype of the first internetwork emerged: the ARPANET (Advanced Research Projects Agency Network).

Thus, on October 29, 1969, the first connection was established between the University of California and the Stanford Research Institute. It was a historic moment, since what is often considered the first e-mail was sent.
Creation of the WWW
In the early 1990s, the British scientist, physicist and professor Tim Berners-Lee developed the World Wide Web (www), a system of hypertext pages on the Internet, together with the first web browser.

From then on, the 1990s became known as the decade of the “Internet boom”, when the network spread across the world, with the arrival of new browsers (Internet Explorer, Netscape, Mozilla Firefox, Google Chrome, Opera, Lynx) and a growing number of users.

With that came a great proliferation of websites, chat services and social networks (Orkut, Facebook, MSN, Twitter), turning the Internet into the global web of connected computers.
Some scholars consider the Internet an important and decisive milestone in technological evolution, because it broke down barriers by bringing people, cultures, worlds and information closer together, something that, they argue, had not happened since the arrival of television in the 1950s.

Today the Internet is used worldwide as a tool for work, entertainment, communication, education and information. Hence the common saying: “I can't live without the Internet.”

In addition, because taxes on them are lower, many products are sold through online shopping sites.
Internet in Brazil
In Brazil, the Internet emerged in the late 1980s, when Brazilian universities began to share information with the United States.
However, it was from 1989, with the founding of the Rede Nacional de Ensino e Pesquisa (RNP), that the project to spread access gained momentum. Its main purpose was to disseminate Internet technology throughout Brazil and to facilitate the exchange of information and research.
In 1997, "local connection networks" were created, expanding access to the entire national territory.
In 2011, according to data from the Ministry of Science and Technology, approximately 80% of the population had internet access, corresponding to 60 million computers in use.
Toda Matéria: school content. Article revised 03/10/19. © 2011-2019 7Graus.
Microsoft and Jackson Lab tackle cancer genetics with ‘human-computer symbiosis’
The rise of therapies that target tumors based on their genetic profile is revolutionizing how we treat cancer. It’s also creating a data overload.
Around 17 million people are diagnosed with cancer each year, and hundreds of cancer studies are published each day. The challenge for doctors is narrowing that flood of evidence down to what is relevant for a single patient and their treatment.
“There’s so much data around. It really becomes a complex question,” said Dr. Sue Mockus, director of product innovation and strategic commercialization at the Jackson Laboratory (Jax).
Microsoft and Jax created a system, called Clinical Knowledgebase, that uses natural language processing to find clinically relevant needles in the haystack of published studies.
In order to match a tumor's genetic profile to the right treatment, hospitals often employ a tumor board, a group of experts from different specialties who assess the available treatments. Clinical Knowledgebase aims to supercharge oncologists with what amounts to a search engine that can return all of the relevant sections from thousands of studies in a few seconds.
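At its simplest, the matching step a tumor board performs can be pictured as a lookup from observed variants to candidate therapies. A minimal sketch in Python, assuming a hypothetical in-memory knowledge base (the variant and drug pairings below are illustrative examples, not taken from Clinical Knowledgebase, and a real system would also weigh evidence levels and approved indications):

```python
# Hypothetical sketch: match a tumor's variant profile against a small
# in-memory knowledge base. Entries are illustrative examples only.
KNOWLEDGE_BASE = {
    "EGFR L858R": ["erlotinib", "osimertinib"],
    "BRAF V600E": ["vemurafenib", "dabrafenib"],
    "ALK fusion": ["crizotinib", "alectinib"],
}

def match_treatments(tumor_variants):
    """Return candidate therapies for each variant found in the knowledge base."""
    matches = {}
    for variant in tumor_variants:
        if variant in KNOWLEDGE_BASE:
            matches[variant] = KNOWLEDGE_BASE[variant]
    return matches

profile = ["EGFR L858R", "TP53 R175H"]  # the TP53 variant has no entry here
print(match_treatments(profile))  # {'EGFR L858R': ['erlotinib', 'osimertinib']}
```

Variants without an entry simply produce no match, which is where the human experts on the board come back in.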
“Sequencing has become really cheap and affordable. Very soon we can actually afford to sequence basically every cancer patient,” said Hoifung Poon, director of precision health natural language processing at Microsoft. The challenge, he said, will be acting on that genetic information efficiently.
The goal is to make the most of human specialists and computers. Machines are great at scaling — hence their role in filtering out studies — while humans are best at applying specific knowledge to find the most promising treatment for each patient.
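The filtering role machines play here can be sketched as a relevance scorer over study abstracts. This is a deliberately crude stand-in for the natural language processing the article describes, with made-up abstracts:

```python
# Toy sketch: rank study abstracts by how many query terms they contain.
# A crude stand-in for the NLP pipeline described above; abstracts are made up.
def score(abstract, query_terms):
    """Count how many query terms appear as words in the abstract."""
    words = set(abstract.lower().split())
    return sum(1 for term in query_terms if term.lower() in words)

def rank_studies(abstracts, query_terms, top_n=2):
    """Return the top_n abstracts most relevant to the query terms."""
    ranked = sorted(abstracts, key=lambda a: score(a, query_terms), reverse=True)
    return ranked[:top_n]

abstracts = [
    "EGFR mutation predicts response to tyrosine kinase inhibitors",
    "Dietary factors in cardiovascular disease",
    "Resistance mechanisms to EGFR inhibitors in lung cancer",
]
print(rank_studies(abstracts, ["EGFR", "lung"]))
```

A production system would use learned language models rather than word overlap, but the division of labor is the same: the machine narrows thousands of papers to a shortlist, and the specialist reads the shortlist.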
“What we have been focusing on is trying to leverage the sweet spot of human-computer symbiosis,” Poon said.
The effort is part of Project Hanover, an initiative launched by Microsoft in 2016 to "solve" cancer. More than 70,000 users across 149 countries already use Jax's platform.
The Jackson Laboratory is a 90-year-old nonprofit research institution in Bar Harbor, Maine, that is known as the repository for thousands of genetic strains of mice used in clinical research.
Microsoft’s growing list of partnerships with healthcare institutions includes Humana, Adaptive Biotechnologies, Providence St. Joseph Health, UCLA Health and University of Pittsburgh Medical Center. In a separate effort, Microsoft created a chatbot that matches patients to clinical trials. Among tech giants, Google parent Alphabet has arguably been the most active in healthcare research efforts, notably using AI to diagnose eye disease and breast cancer.
Seattle-based journalist James Thorne is an NYU business and economics journalism grad who has written for publications including Reuters, CNBC, and Financial Planning.