
  • Written by Giovanni Navarria, Lecturer and Research Fellow, Sydney Democracy Network, School of Social and Political Sciences (SSPS), University of Sydney

This essay is the first of a three-part series commemorating the anniversary of the first message ever sent across the ARPANET, the progenitor of the Internet, on October 29, 1969.

In the late hours of October 29, 1969, an apparently insignificant experiment carried out in a lab at the University of California, Los Angeles (UCLA) would spark a revolution, the implications of which are still unfolding nearly five decades later.

It was a rather unusual revolution. Firstly, it was peaceful and harmless. Unlike the three “days of rage” demonstrations organised by the Weatherman group earlier that month in Chicago, that evening in Los Angeles saw no violence, let alone any sign of rage; no one was arrested, the streets remained quiet and all windows were left intact. The police were never even aware of what was going on at UCLA.

The experiment had nothing to do with politics. The two unlikely rebels were not the ringleaders of a radical political organisation but two university researchers: Leonard Kleinrock, a professor of computer science, and Charley Kline, one of his postgraduate students.

Far from sharpening knives and devising complicated plots to overthrow the world’s social order, the two spent the evening rather quietly, shut inside their lab, focusing all of their energy on what appeared to be a complex, yet merely technical problem: how to establish a communication link between two computers hundreds of miles apart, with one located at UCLA and the other at the Stanford Research Institute (SRI).

That two computers can “talk” to each other and exchange information is precisely the kind of magic we are liable to take for granted in today’s world of technological marvels. Our precious smartphones receive and send data all the time. These processes are often so automatic that we remain (happily) oblivious to them. But in 1969, “communication” between machines was still very much an unsolved puzzle, at least until the fateful night of October 29.

For most of the evening, success eluded the researchers. Only around 10:30pm, after several frustrating attempts, was Kline’s computer finally able to ‘talk’ to its counterpart at SRI. It was, however, a rather stuttered beginning.

Image: the first Internet log (source: Web)

Kline was only able to send the “l” and the “o” of the command “login” before the system crashed and had to be rebooted. Still, even after the communication link between the two computers was successfully established later that night, it seems more than fitting that a friendly “Lo”, a common abbreviation of “Hello”, was the first historic message ever sent over the ARPANET, the experimental computer network built at the end of the 1960s to allow researchers from UCLA, SRI, the University of California Santa Barbara (UCSB) and the University of Utah to work together and share resources.

Charley Kline recounts the first experiment

The message marked an important milestone: though its payload amounted to just a couple of bytes, its significance was immeasurable. It carried with it the seeds of a new age that would soon revolutionise the way in which not only machines, but also people, communicate with each other. With their stuttered “hello”, Kleinrock and Kline accidentally gave birth to a whole new world of possibility. Their experiment represented the first exploration into a brand new communication galaxy, the shape and scope of which was to be determined by each new communication link added to it.

Thank you, Sputnik

The seeds of the ARPANET experiment, however, were sown more than a decade earlier, on October 4, 1957, the very same day the space race began at the Baikonur Cosmodrome in Kazakhstan with the launch of the Soviet satellite Sputnik.

While only a handful of people were aware of the experiment taking place at UCLA, the launch at Baikonur didn’t go unnoticed. Press agencies around the world recognised the historic moment and went into a frenzy. The Sputnik, only 58 centimetres in diameter and coming in at just over 80 kilograms, was the first artificial object to float into space, beyond the limits of our earth-bound lives.

Image: Sputnik (source: NASA)

The launch was also very significant politically. In the midst of the Cold War, the Sputnik was both a scientific slap in the face for Americans and a new threat facing the West. It was “the smoking gun” that left no doubt that the Soviets were on the road to colonising space. From all the way up there they would soon be able to spy on their enemies without interruption and (the worst of possible nightmares) stealthily drop nuclear bombs on American soil.

It was now clear that, contrary to popular belief, the Russians were no longer behind the Americans in terms of technology. Indeed, it was looking to be rather the opposite. As one of Senator Lyndon Johnson’s aides, George E. Reedy, put it perfectly in November 1957: “It took [the Russians] four years to catch up to our atomic bomb and nine months to catch up to our hydrogen bomb. Now we are trying to catch up to their satellite”.

Image: New York Times front page (source: Web)

The Sputnik’s wake-up call sent the US Administration into a panic and produced two important consequences: one direct, clear from the beginning, and one unintended, which took several years to materialise.

The first consequence was political: in jump-starting the space race, the Sputnik opened a new front in the Cold War between the Soviet Union and the United States. To catch up with the Russians, President Dwight Eisenhower assigned the coordination of all defence research and development (R&D) programs to the newly established Advanced Research Projects Agency (ARPA). However, ARPA’s involvement with the space race didn’t last long. Despite a budget exceeding $2 billion and some initial success with Explorer 1 and Vanguard 1, the first two American satellites sent into space, the US Government decided to set up a new agency, the National Aeronautics and Space Administration (NASA), to maximise the space effort and cope more efficiently with the political pressure spawned by the Sputnik’s feat.

In the summer of ’58, NASA was born. A civil federal agency with the stated mission to ‘plan, direct, and conduct aeronautical and space activities’, it effectively stripped ARPA of all its space and rocket projects. To avoid falling into oblivion, the agency had to quickly reinvent itself and find new goals and new sectors for its projects. With a reduced but still considerable budget, ARPA found the solution to its problems in a brand new sector of pure research, computer science, and in the visionary leadership of Joseph Carl R. Licklider. The long-term effect of ARPA’s new path was the second (and unintended) consequence of the Sputnik: a new galaxy of communication, first called the ARPANET and later the Internet.

Man–Computer Symbiosis

Licklider, by all accounts a brilliant scientist, strongly believed the future would be shaped by computers that were linked into a network. Computer networks, he argued, would become critical for expanding the potential of human thinking beyond all known limits. Thinking, Licklider reasoned, is often burdened with unnecessary tasks that limit our capacity to really be creative. His argument was informed by the results of an experiment he had conducted in the same year that Sputnik reached space.

Image: J. C. R. Licklider (source: Wikipedia)

The experiment, which focused entirely on his own working routine, showed that about 85 percent of his “thinking time” was devoted to activities that were not intellectual but purely clerical or mechanical. Much more time, Licklider found, “went into finding or obtaining information than into digesting it”. If science could find a suitable, more reliable and faster substitute for the human brain for those clerical activities, Licklider theorised, the result would be an unparalleled enhancement of the quality and depth of our thinking. If machines could take care of the “clerical” activities, humans would have more time and energy to dedicate to thinking, imagining, creativity and interactivity.

Licklider’s ideas went beyond his era’s traditional conception of computers as big calculators. He envisioned a much more interactive and complex environment in which computers would act as a natural extension of humanity.

In his seminal 1960 paper Man–Computer Symbiosis, Licklider wrote that in the near future “human brains and computing machines will be coupled together very tightly”. The resulting symbiosis, he postulated, would “think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today”. By the early 1960s it was clear to Licklider that computers were destined to become an integral part of human life. He was thinking of what he later called, with a certain emphasis, “the intergalactic network”: the perfect symbiosis between computers and humans. The ultimate goal of this symbiosis was to significantly improve the quality of people’s lives.

Time Sharing

Licklider lived in an age when computers were nothing like the ones we use today. They were gigantic and expensive, with price tags ranging from $500,000 to several million dollars.

When he arrived at ARPA on October 1, 1962, Licklider quickly realised that, to overcome the unsustainable costs of the computer research centres ARPA funded, those centres had to be required to buy time-sharing computers.

Time-sharing systems were designed to allow multiple users to connect to a powerful mainframe computer simultaneously and to interact with it via a console that ran their applications by sharing processor time. Before time-sharing systems were adopted, computers, even the most expensive ones, were bound to do jobs serially: one at a time. This meant that computers often sat idle while waiting for their users’ input or for a computation to finish.

Time-sharing systems, then, guaranteed the most effective use of a computer’s processing power.
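To make the contrast concrete, here is a minimal sketch in Python, with hypothetical job names and a toy unit of “work” (real systems such as CTSS were of course far more sophisticated), showing the difference between running jobs serially and interleaving them in short time slices:

```python
# Illustrative sketch only: the idea behind time-sharing, not how actual
# 1960s systems were implemented. The processor hands out short "time
# slices" to each user's job in turn, so every job makes steady progress.
from collections import deque

def run_serially(jobs):
    """Batch style: each job runs to completion before the next starts."""
    order = []
    for name, work_units in jobs:
        for _ in range(work_units):
            order.append(name)          # the CPU is dedicated to one job
    return order

def run_time_shared(jobs, slice_units=1):
    """Round-robin time-sharing: every job gets a short slice in turn."""
    queue = deque(jobs)                  # (name, remaining work units)
    order = []
    while queue:
        name, remaining = queue.popleft()
        used = min(slice_units, remaining)
        order.extend([name] * used)      # job runs only for its slice
        if remaining - used > 0:
            queue.append((name, remaining - used))  # back of the queue
    return order

if __name__ == "__main__":
    jobs = [("alice", 3), ("bob", 2), ("carol", 2)]
    print("serial:     ", run_serially(jobs))
    print("time-shared:", run_time_shared(jobs))
```

In the serial run, the last user waits for every earlier job to finish before seeing any progress; in the round-robin run, each user’s job advances a little on every pass, which is the property that made a single expensive mainframe feel responsive to several users at once.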

If the first step was to force universities to use their funds to buy time-sharing systems, the next step was to allow access to off-site resources via other computers. In other words, to make those resources available via a network.

The computers we use today, including our small but incredibly powerful smartphones, run simultaneous applications all the time. While I am writing this piece, I can play music, receive email, run a diagnostic on my hard-drive, search a database, and much more. In the pre-Internet world, in the era of expensive mainframe computers, each of those tasks would have required a dedicated machine.

Moreover, despite their formidable size and price tag, mainframe computers were only capable of performing a limited number of computational tasks, usually tailored to the needs of whoever owned or rented them. If an experiment involved a variety of tasks, it could require more than one computer. However, given the prohibitive costs of the hardware, most research centres could not afford more than one machine. So the solution to the problem had to be found elsewhere: resource-sharing via a computer network. But building such a network was by no means a simple task.

Image: IBM CTSS, the Compatible Time-Sharing System (source: IBM)

During the previous decade, the lack of homogeneity in programming languages had created a Babel of multiple systems and debugging procedures that slowed the development of computer science. For Licklider, it was clear that the man-computer symbiosis he had envisioned could only materialise after the different systems learned to speak the same language, and after they were integrated into a super-network.

The initial push for time-sharing was instrumental in breeding a new culture among computer scientists. “Networking”, centred on the need for common standards to facilitate communication between different systems, became indispensable for time-sharing to be effective. Though initially confined to the elitist realm of computer science, in the long term (and with the spread of the Internet) networking has become the norm in the organising processes of many human activities that we take for granted today.


Read more http://theconversation.com/how-the-internet-was-born-a-stuttered-hello-67903
