Author: Peter T. Kirstein, Professor of Computer Communications Science, UCL
Original article: https://theconversation.com/how-britain-got-its-first-internet-connection-by-the-late-pioneer-who-created-the-first-password-on-the-internet-45404
British computer scientist and Internet Hall of Fame inductee Peter Kirstein died in January 2020 at the age of 86, after a nearly 50-year career at UCL. A few years before he died, he was commissioned by then Conversation technology editor Michael Parker (now director of operations) to write an in-depth piece originally intended as part of a special series on the internet. It wasn’t published at the time, as the series was postponed, but now, to mark Professor Kirstein’s contributions, we are delighted to be able to publish his reflections on the challenges he faced in connecting the UK in the early 1970s to the forerunner of what would become the modern internet. The article was edited by Michael, with oversight kindly provided by Professor Jon Crowcroft, a colleague of Professor Kirstein’s.
The internet has become the most prevalent communications technology the world has ever seen. Though there are more fixed and mobile telephone connections, even they use internet technology at their core. For all the many uses the internet allows today, its origins lie in the Cold War and the need for a defence communications network that could survive a nuclear strike. But that defence network quickly came to be used for general communications, and within only a few years of the first transmission, traffic on the predecessor to today’s internet was already 75% email.
In the beginning
Arpanet was the vital precursor of today’s internet, commissioned by the US Defense Advanced Research Projects Agency (Darpa) in 1969. In his interesting account of why Arpanet came about, Stephen Lukasik, Director of Darpa from 1970-75, wrote that if its true nature and impact had been realised, it would never have been permitted under the US government structure of the time. The concept of a decentralised communications technology that would survive a nuclear attack would have placed it outside Darpa’s remit (defence communications were specifically assigned to a different agency), so the focus changed to how to connect computers together so that major applications could be run on the most appropriate system available.
This was in the era of time-sharing computers. Today’s familiar world of the ubiquitous “personal computer” on each desk was decades away. Computers of this time were generally very large, filling entire rooms, and comparatively rare. Users working at connected terminals would submit jobs to the computer, which would allocate processing time for each job as it became available. The idea was that if these computers were networked together, an available remote computer could process a job even when the computers closer to the users were full. The resulting network was called Arpanet, and the first packets of data traversed the network in September 1969.
At this time the computing industry was dominated by a few large companies, which produced products that would work only with others from the same company. However, the Arpanet concept included a vital decision on how the network would function. It sharply separated three things from one another: the technology and medium carrying the communications (satellite link, copper cable, optical fibre); the network layer (the software that manages communications between different computers); and the applications (the programs that users run over the network to do work).
This contrasted with the vertical “stove-pipe” philosophy that persisted among computer manufacturers at the time, where any networking that existed worked only in specific situations and for specific computer systems. For example, IBM computers could communicate using IBM’s SNA protocol, but not with non-IBM equipment. The direction Arpanet took was manufacturer-agnostic, where different types of computers could be networked together.
First footprint in Europe
In 1970, the leading network research outside the US was being done by a group at the National Physical Laboratory (NPL) in London, led by Donald Davies. Davies had built a network with similar concepts to Arpanet, and as one of the inventors of packet-switching his work had influenced the direction of Arpanet. But despite his plans for a national digital network, he was prevented from extending his project outside the lab by pressure from the British Post Office, which then held a monopoly on telecommunications.
Around this time the director of the Arpanet project, Larry Roberts, proposed connecting Arpanet to Davies’ NPL network in the UK. This would be possible because a few years previously a large seismic array in Norway, run by Norwegian researchers for Darpa, had been connected to Arpanet via a dedicated 2.4 kbps connection to Washington. Given the transatlantic technology of the time, this went by satellite link via the only earth station for satellite communications in Europe, at Goonhilly in Cornwall, and thence by cable to Oslo. Larry proposed to interrupt the connection in London, connect the NPL network, and then continue to Norway.
Since the international communications were the main cost, this seemed straightforward. Unfortunately, Britain was at this point negotiating to join the Common Market, and the UK government was afraid that closer links with the US would jeopardise the talks. When the government refused NPL permission to participate, I was the obvious alternative, as I was doing relevant research at the University of London’s Institute of Computer Science and subsequently at UCL.
Vaulting many non-technical hurdles
From the beginning I proposed a twin approach. I would connect the large computers at the University of London and the Rutherford and Appleton Laboratories (RAL) in Oxfordshire, which were hubs for other UK computer networks, and I would provide services to allow UK researchers to use the networks to collaborate with colleagues in the US.
This novel approach would mean the IBM System 360/195 at RAL, then the most powerful computer in the UK, would be made available as a remote host – available to those in the US on the other side of the transatlantic link – without being directly connected to the interface message processor (the equipment which sent and received messages between Arpanet nodes) that would be installed at UCL.
Unfortunately there then came many non-technical hurdles. I attempted to get other universities’ computer science departments to back the project, but this foundered because the Science Research Council did not consider the opportunity worth funding. The UK Department of Industry wanted a statement of interest from industry before it would provide funding, but even though I knew executives at ICL, the UK’s principal computer manufacturer, after months of agonising it declined, stating that “one would gain more from a two-week visit to the US than from a physical link”. Consequently, after a year of back and forth, I had nothing.
However, by 1973 the project was becoming a reality. By now the Norwegian seismic array, Norsar, was connected to Arpanet via a newly opened satellite earth station at Tanum in Sweden, so there was no longer a link via the UK at all. What was now required was a link from UCL to Oslo. With a small grant of £5,000 from Donald Davies at the NPL, and the provision by the British Post Office of a 9.6 kbps link to Oslo without charge for one year, we had the resources to proceed.
Darpa duly shipped the interface message processor with which to connect the new London node to Arpanet. It was promptly impounded at Heathrow Airport for import duty and the newly introduced Value Added Tax. I managed to avoid paying the duty by declaring it an “instrument on loan”, but it took all my available funds to provide a guarantee that would allow me to get hold of the equipment pending an appeal. With the equipment finally installed, in July 1973 I connected the first computers outside the US to the Arpanet, sending a transmission from London, via Norway, through the Arpanet to the Information Sciences Institute at the University of Southern California.
First password on the internet
Within three months my group was able to implement the Arpanet network protocols and translate them to the IBM protocols necessary to communicate with computers at RAL. And so, once connected to the wider network through our gateway at UCL, the IBM computer at RAL became one of the most powerful on the Arpanet.
When I gave a talk stating this fact, RAL staff at first did not believe me; they still saw only my small minicomputer, without understanding that it was the gateway to the rest of the Arpanet on the other side of the link. On realising this, they became very concerned that access to their computer services would be available not only to me but, with my complicity, to the whole research community in the US.
However, I had been concerned that I would be criticised, in exactly this way, for improper use of both UK and US facilities. So from the beginning I put password protection on my gateway. I did this in such a way that even if UK users telephoned directly into the communications computer provided by Darpa at UCL, they would still require a password.
In fact this was the first password on Arpanet. It proved invaluable in satisfying authorities on both sides of the Atlantic for the 15 years I ran the service – during which no security breach occurred over my link. I also put in place a system of governance that any UK users had to be approved by a committee which I chaired but which also had UK government and British Post Office representation.
The transatlantic connection included terminal services (which connected users to remote computers to run jobs), file access and, later, email services. It was immediately very popular. Within a couple of years, I was supported by half a dozen government ministries, with leased-line links (dedicated lines) to five remote sites – some of which allowed access through their own networks. Other users could telephone into my UCL site, or use the fledgling Post Office data network to which I also provided access.
Indeed, the connection’s profile had become so prominent that when the Queen opened a building in 1976 at the Ministry of Defence’s Royal Radar Establishment at Malvern in Worcestershire (which had taken over funding the leased line to Oslo), she inaugurated the connection by sending an email – the first to be sent by a head of state.
As the UK side of Arpanet continued growing, additional message processors had to be imported, each one racking up additional VAT and duty to be paid, pending the outcome of the appeal. Finally, in 1976, the appeal was refused. But a subsequent meeting with senior Treasury officials led to an agreement that my research group would be permitted to import equipment free of VAT and duty. The importance of this ruling in ensuring the independence of our operation cannot be overemphasised: over the following decade many government bodies considered trying to take it over, and each time they were discouraged by the magnitude of the VAT and duty bill they would incur.
Agreeing the language of Arpanet
In their 1974 paper, Bob Kahn at Darpa and Vint Cerf at Stanford University made the next vital contribution towards building the internet of today when they formulated the concept of connecting together different network technologies – such as those defined by different computer manufacturers, or designed for different communications media such as cable, satellite link or radio waves – with a common inter-network layer, which would come to be known as TCP/IP.
Transmission Control Protocol (TCP) managed the packaging and unpacking of data sent between computers, while Internet Protocol (IP) provided the pathfinding to ensure the data packets reached the intended destination. One of the important aspects of IP was that it allowed scalability: the 8-bit number previously used to identify a computer on the network, which allowed just 256 devices, became a 32-bit number, which allowed around 4 billion devices.
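To make that division of labour concrete, here is a minimal, present-day sketch in Python, not something that existed in the period described here. The host name and port are assumptions chosen purely for illustration: the application simply names a destination and writes bytes, while IP finds a path to that destination and TCP packages, retransmits and reassembles the data along the way.

```python
# Illustrative sketch only: a modern TCP client in Python.
# "example.com" and port 80 are assumptions chosen for illustration.
import socket

# IP handles the pathfinding to the destination address; TCP turns the bytes
# we write into packets, retransmits any that are lost and reassembles them
# in order, so the application sees only a reliable byte stream.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = conn.recv(4096)
    print(reply.decode("latin-1", errors="replace").splitlines()[0])
```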
I misjudged how successful TCP/IP would be. In one of the first papers on network interconnection, Cerf argued that all computers should adopt TCP/IP, but I felt that this was unrealistic, and that gateways like the interface message processors were needed to “translate” communications between networks. My view prevailed for the first 15 years, but in the long run Cerf’s view was the right one.
At UCL, my group participated in the first independent TCP/IP implementations, connecting networks built on technologies other than Arpanet’s for the first time in 1977. Three different types of network – Arpanet, the satellite network Satnet, and PRNET, a packet-radio network using radio transmissions from mobile vans – were all connected using the same common “language”, TCP/IP. This was in essence the first demonstration of the internet: a network of networks.
Later, we connected the first multi-service heterogeneous network outside the US (Janet, the UK’s academic network connecting universities) to Arpanet, and then to the internet in the early 1980s. Indeed, UCL was the first organisation on Arpanet to adopt TCP/IP as standard.
During the 1980s the internet approach took over, where computers used TCP/IP to manage their own connections to the network. Darpa provided funding to add TCP/IP into its chosen operating system of the time, BSD, and this was later made available to the public.
After the release of the IBM PC microcomputer in 1981 there was a rapid growth of (relatively) cheap personal computers in offices, connected to each other by Ethernet networks. And routers (small devices to connect networks) were developed that made the huge, outdated interface message processors used with the original Arpanet obsolete.
The universal adoption of common protocols that provided useful services like virtual terminal (telnet), file transfer (FTP), directory (LDAP) and email (SMTP) made the internet an invaluable tool for researchers. As fibre optic installations became more economical, networks could scale up to very large numbers of interconnected computers. The internet’s most widespread and largest use by volume was still email, but a number of shared data repositories and resources developed.
Then in 1989, with the development of the World Wide Web, Tim Berners-Lee provided the killer application that would make the internet essential to all types of commercial and government use. The simplicity and ease of use of the web and web browsers, together with the internet as the distribution mechanism underpinning it, laid the basis for the universal use of the internet we have today.
The little black book of the internet
Even back when there were only a few hundred computers on the network, discovering their addresses and maintaining a directory of them had become impractical. Bob Kahn, then director of the relevant office at Darpa, remedied this problem by commissioning the Domain Name System (DNS). This mapped names, organised in a hierarchical structure, to IP addresses. The effect was a sort of directory of internet-connected computers, where top-level domains (such as .com, .org, .uk, .fr) lay above second-level domains (such as .ac.uk, .co.uk, or microsoft.com, wikipedia.org), which in turn lay above the domains below them (such as www.microsoft.com or www.wikipedia.org, where the www represents a subdomain below the domain). This domain model forms the basis of the URLs that we type into our browser address bars today.
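As a small modern illustration (again a sketch, not part of the original account), asking today’s DNS to turn a couple of hierarchical names into addresses might look like this in Python; the domain names used are assumptions chosen for the example.

```python
# Illustrative sketch: resolving hierarchical domain names to IP addresses.
# The domain names below are assumptions chosen for illustration.
import socket

for name in ("www.ucl.ac.uk", "www.wikipedia.org"):
    # The system resolver follows the hierarchy (.uk/.org, then ac.uk/
    # wikipedia.org, then the www records) to find the matching addresses.
    addresses = {info[4][0] for info in socket.getaddrinfo(name, None)}
    print(name, "->", sorted(addresses))
```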
Although four billion addresses seemed near infinite in 1974, by the early 1990s it was already evident that the internet would soon run out of the IP (IPv4) addresses necessary for computers to be connected to the internet. Work on the next generation of IP, IPv6, increased the number of routable network addresses from 32 bits (2³², or about 4 billion) to 128 bits (2¹²⁸, or about 3.4×10³⁸). Technical fixes managed to extend the lifetime of IPv4, but over the last few years the need to move to IPv6 has become pressing, and adoption is now happening faster.
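For a sense of the scale involved, here is a brief back-of-the-envelope sketch in Python; the two addresses shown are standard documentation examples, used here purely as illustrative assumptions.

```python
# Illustrative sketch of the IPv4 -> IPv6 address-space growth.
import ipaddress

print(f"IPv4 address space: 2**32  = {2**32:,}")     # roughly 4.3 billion
print(f"IPv6 address space: 2**128 = {2**128:.3e}")  # roughly 3.4 x 10**38

# 192.0.2.1 and 2001:db8::1 are reserved documentation addresses,
# used here only as examples of the two formats.
v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4, "is", v4.max_prefixlen, "bits;", v6, "is", v6.max_prefixlen, "bits")
```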
Growth and change
Over the last two decades, the emergence of social networks, the increasing availability of internet streaming media and the integration of mobile telephone networks with the internet have hugely increased demand for internet capacity. Such demand will require large investments to meet, but probably without any radical rethink of the internet’s architecture. The number of internet-connected devices is growing significantly, but we can assume it will increase only to a small multiple of the world’s population. So even if the protocols that govern how devices connect to the internet had to change to cope with demand, this could be achieved within only a few years.
The ability to monitor the activities of people – with or without their knowledge – is one important outcome of so many people being so frequently connected to the network. The ability of unauthorised individuals to hack into private systems, to obtain private data or to damage operations, is a very worrying development. The advances needed in computer and network security will require massive research and development, and new legal and regulatory powers. And an even more disruptive development now looms: the Internet of Things.
Increasingly, devices and equipment found in all aspects of our lives may incorporate sensors and actuators that can be operated remotely. The estimated number of devices to be network-connected is much larger: as many as hundreds of billions within ten years. They include cars (for navigation or automated driving), home appliances (automation, security), devices on the national power grid (monitoring and error correction), smart buildings (temperature or humidity control, security), smart cities (traffic control, services supply, waste management), wearable and implanted medical devices, and so on.
The characteristics of such devices are often quite different from today’s computers on the internet. The data rate may be very low, and often, though not always, the data may be needed only on local networks rather than across the full internet. The devices or their controllers may have internet interfaces, but they may not obey other internet protocols, and they may need to be left in place for years, or even decades.
They may not be able to carry out sophisticated security operations themselves, yet ensuring they are secure will be crucial if they are not to become a vast, vulnerable network of potential points of entry for hostile actors. It is the Things on the internet of the future, rather than typical computing devices, that may prompt a radical rethink of the way the internet works.
The impact of the internet on our way of life in its first 40 years has been immeasurable. It has expanded and developed in a way none of us envisaged in 1975. While we may have a better idea of what to expect over the next couple of decades, I am sure most of us will be mistaken.