Our previous installment described the rise of automatic telephone switches, and the development of complex relay circuits to control them. Now we shall see how scientists and engineers developed such relay circuits into the first, lost, generation of digital computers.
The Relay at Zenith
Recall to mind the relay, based on the simple idea that an electromagnet could operate a metal switch. It was conceived independently several times by natural philosophers and telegraph entrepreneurs in the 1830s. The inventors and mechanics of the middle century then forged it into a robust and essential component of telegraph networks. But it reached the zenith of its development in the telephone networks, miniaturized and diversified into a myriad of forms by a new breed of engineers with formal training in mathematics and physics.
Not only the automatic switching systems, but virtually all of the equipment in the telephone networks of the early twentieth century involved relays in some fashion. Their earliest use was in the ‘drops’ on the manual switchboards that first appeared in the late 1870s. When a caller cranked the magneto on their phone to signal the central office, it triggered a drop relay, allowing a metal shutter on the switchboard to fall, thus indicating the incoming call to the operator. When the operator inserted her plug into the jack, it reset the relay, allowing her to lift the shutter back up to be held in place by the reactivated magnet.
By 1924, two Bell engineers wrote, a typical manual office, serving 10,000 subscribers, “would have from 40,000 to 65,000 relays” with a combined magnetic strength “sufficient to lift ten tons.” A large machine-switched office would have double that number. The U.S. telephone system taken as a whole, then, was a tremendous machine of many millions of relays, growing in number all the time as Bell steadily automated office after office. The connection of a single call would engage from a handful up to several hundred of them, depending on the number and types of offices involved.
An endless menagerie of relays marched forth from the factories of Western Electric, Bell’s manufacturing arm, to supply the needs of this vast system. Bell engineers bred varieties enough to sate the most jaded pigeon fancier or kennel club. They optimized for speed, sensitivity, and small size. In 1921, Western Electric produced almost five million of the things, in 100 main types. Most common was the generalist Type E, a flat, roughly rectangular device weighing a few ounces, made primarily of stamped-metal parts for ease of manufacturing. A casing around the relay body protected the contacts from dust and the electrical circuits from interference from their neighbors – the relays were typically tightly packed together by the hundreds and thousands in towering racks at the central offices. The Type E was made in 3,000 different variants, each with its special configuration of wire windings and contacts.1
Soon these relays were incorporated into the most complex switching circuit yet known.
In 1910, Gotthilf Betulander had an idea. Betulander was an engineer at Royal Telegrafverket, a state-owned corporation that controlled most (and within a decade virtually all) of the Swedish telephone market. He believed he could greatly improve the efficiency of Telegrafverket’s operations by building an automatic switching system entirely of relays: a matrix of relays sitting at each intersection in a lattice of metal bars which connected to the phone lines. It would be faster, more reliable, and easier to maintain than the sliding and rotating contacts then used.
Moreover, Betulander realized that he could separate the selection and connection portions of the system into independent relay circuits. The former would be used only to set up the speech circuit, and then could be freed for use in another call. Betulander had independently arrived at what was later dubbed “common control”.
He called the relay circuit that stored the incoming number the recorder (the register, in American parlance), and the circuit that actually found and ‘marked’ an available connection in the lattice the marker. He patented his system and saw a handful put into use in Stockholm and London. Then, in 1918, he heard about an unexploited American innovation, a crossbar switch conceived by Bell engineer John Reynolds five years before. It worked much like his own, but used only n + m relays to serve the n x m junction points of the lattice, making it much more feasible to scale up for larger offices. To make a connection it trapped piano wire ‘fingers’ against a holding bar, allowing the selecting bar to move on to connect another call. By the following year Betulander had integrated this idea into his own switching apparatus.
Most telephone engineers, however, found Betulander’s system strange and excessively complex. When it came time to choose a switching system for automating Sweden’s major cities, Telegrafverket chose one developed by another native company, Ericsson. The Betulander switch survived only in a scaled-down model adapted to rural markets – because the relays were more reliable than the motor-operated machinery of the Ericsson switch, no dedicated service personnel were required at each office.2
American telephone engineers, however, came to a different conclusion. A 1930 Bell Labs mission to Sweden was “fully convinced of the merits of the crossbar connecting unit”, and upon their return they immediately set to work on what came to be known as the “No. 1 crossbar system” to replace the panel switch in large cities.3 By 1938, two were installed in New York City, and they soon became the switching system of choice for urban markets until the arrival of electronic switching over thirty years later.
For our purposes, the most interesting component of the No. 1 crossbar was the new, more sophisticated marker developed by Bell. Its job was to find an idle path from the caller to the callee through the multiple linked crossbar units that formed the exchange. It had to test the idle and busy state of each connection. This required conditional logic. As telephone historian Robert Chapuis wrote4:
Selection is conditional, since an idle link is retained only if it gives access to a crossbar which has as its outlet an idle link to the next stage. When several sets of links meet the conditions, a ‘preferential logic’ singles out one [consisting] of the links which are the lowest numbered…
The crossbar switch is a beautiful case study in the cross-fertilization of technological ideas. Betulander conceived of his all-relay switch, then improved it with Reynolds’ switching fabric, and proved it could work. AT&T engineers then re-absorbed this hybrid creation, made their own refinements, and thus produced the No. 1 crossbar system. This system then itself became a component of two early computing machines, one of which inspired a landmark paper in the history of computing.
In order to understand how and why relays and their electronic cousins helped bring about a revolution in computing, we must take a brief detour into the world of mathematical labor.5 We will then understand the latent demand for a better way to calculate.
By the early twentieth century, the entire edifice of modern science and engineering rode upon the backs of thousands of mathematical laborers, known as computers. Charles Babbage had recognized the problem as far back as the 1820s, and thus proposed his difference engine (and even he had antecedents). His primary concern was to automate the construction of mathematical tables, such as those used for navigation (e.g. computing the value of the trigonometric functions by polynomial approximations at 0 degrees, 0.01 degrees, 0.02 degrees, etc.). The other main demand on computational labor at that time came from astronomers: to process raw telescope observations into fixed positions on the celestial sphere (based on the time and date when they were made), or to determine the orbit of a new object (such as Halley’s Comet).
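The labor in question was exactly this sort of repetitive evaluation, entry after entry. As a hedged sketch (the truncated Taylor polynomial and grid of angles below are my own illustrative choices; Babbage's engine actually worked by finite differences), each table entry meant grinding through a polynomial by hand:

```python
import math

# Illustrative only: tabulating sine values from a truncated Taylor
# polynomial, the kind of repetitive evaluation a table-making project
# demanded of its human computers.

def sin_poly(x):
    """Approximate sin(x), x in radians, with terms through x**7."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

# One row of the table per angle on the grid.
for degrees in (0.00, 0.01, 0.02, 0.03):
    x = math.radians(degrees)
    print(f"sin({degrees:.2f} deg) ~= {sin_poly(x):.10f}")
```

A human computer repeated this arithmetic thousands of times per table; the machines in this story were built to take it off their hands.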
Since Babbage’s time, the problem had grown much more acute. Electrical power companies wanted to understand the behavior of transmission systems with extremely complex dynamic properties. Bessemer-steel guns that could throw a shell over the horizon (and whose aim, therefore, could no longer be corrected by eye) created the demand for ever more precise ballistics tables. New, computationally-intensive statistical tools (such as the method of least-squares) spread across the sciences, and into the ever-growing bureaucracies of the modern state. Offices devoted to computation, like telephone offices usually staffed largely or entirely by women, sprang up in universities, government departments, and industrial corporations.
Mechanical calculators did exist, but they only ameliorated the labor problem, they did not eliminate it. They could perform simple arithmetic operations quickly, but any interesting science or engineering problem involved many hundreds or thousands of such operations, each of which the (human) computer had to enter by hand, carefully transcribing all of the intermediate results.
There were several forces that helped bring about new approaches to the problem of mathematical labor. Young scientists and engineers grinding out their own calculations late into the night sought relief for their cramping hands and drooping eyelids. Program administrators felt the pinch of the ever-increasing wages for their platoons of computers, especially after World War I. Finally, many problems at the frontiers of science and engineering were intractable to manual calculation. One series of machines that arose from these pressures was overseen by an electrical engineer at the Massachusetts Institute of Technology (MIT) named Vannevar Bush.
Our story, in recent chapters so often impersonal, even anonymous, now returns to the realm of personality. Fame is partial with her favor, and has not seen fit to bestow any upon the creators of the panel switch, the type E relay, the crossbar marker circuit. There are no biographical anecdotes we can summon to illuminate the lives of these men; the only readily available remains of their lives are the stark fossils of the machines they created.6
Now, however, we are to deal with men lucky enough to have left a deeper impression in the record of the past – not just bones, but feathers. But we will no longer meet individuals toiling away in their private attic or workshop – a Morse and Vail, or Bell and Watson. By the end of World War I, the era of the heroic inventor was largely over. Thomas Edison is, by convention, the transitional figure – inventor-for-hire at the start of his career, proprietor of an “invention factory” by its end.7 Institutions – universities, corporate research departments, government labs – now governed the development of most significant new technologies. The men we will meet in the remainder of this chapter all belonged to such institutions.
One of them was the aforementioned Vannevar Bush, who arrived at MIT in 1919, at the age of 29. Just over twenty years later he joined the chief directors of the American effort in the Second World War, and helped to unleash a flood of government spending that would permanently transform the relationship between government, academia, and the development of science and technology. But for our current purposes, what matters is the series of machines developed in his lab to attack the problem of mathematical labor, from the mid-1920s onward.8
MIT, newly relocated from central Boston to the banks of the Charles in Cambridge, was closely attuned to the needs of industry. Bush himself had his hand in several business ventures in the field of electronics, in addition to his professorship. It should come as no surprise, then, that the problem that initially drove Bush and his students to build a new computing device came from the electrical power industry: modeling the behavior of long-distance power lines under sudden loads. But it was obvious that this was only one of many possible applications: tedious mathematical labor was everywhere around them.
Bush and his collaborators first built two machines that they called product integraphs. But the best-known and most successful of the MIT machines was the third, the differential analyzer, completed in 1931. In addition to power transmission problems, it computed electron orbits, the trajectories of cosmic rays in the earth’s magnetic field, and much more. Researchers across the world, hungry for computational power, built dozens of copies or variants throughout the 1930s, some of them even constructed from Meccano (the British equivalent to the American Erector Set).
The differential analyzer was an analog computer. It computed mathematical functions by turning metal rods, each rod’s rotational velocity serving as a direct physical analog of some quantity. A motor drove the independent variable rod (typically representing time), driving the other rods (representing various derived variables) in turn by mechanical linkages that computed a function over the input velocity. The final outputs were plotted as curves on paper. The most important components were the integrators, wheels turned by discs that could compute the integral of a curve without any tedious hand calculation.
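What the wheel-and-disc integrator did mechanically can be pictured as a running sum: each small turn dx of the disc advances the wheel by y·dx, so the wheel's total rotation accumulates the integral of y with respect to x. A rough discrete analogy in Python (the step count and sample integrand are my illustrative choices, not anything from the actual machine):

```python
# A discrete analogy for the wheel-and-disc integrator: each small turn dx
# of the disc advances the wheel by y * dx, and the wheel's accumulated
# rotation approximates the integral of y with respect to x.

def integrate(y, x_start, x_end, steps=100_000):
    dx = (x_end - x_start) / steps
    total, x = 0.0, x_start
    for _ in range(steps):
        total += y(x) * dx  # one tiny increment of the wheel's rotation
        x += dx
    return total

# The integral of 2x from 0 to 1 is exactly 1; the sum comes out very close.
print(integrate(lambda x: 2 * x, 0.0, 1.0))
```

The mechanical integrator did this continuously and in real time, which is what made it so much faster than a human computer summing small increments by hand.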
None of this machinery involved the abrupt on/off discontinuity of relays, or any kind of digital switch, so one may fairly wonder what all of this has to do with our story. The answer is contained in the fourth machine in MIT’s family of computing machines.
In the early 1930s, Bush began wooing the Rockefeller Foundation for funding for a sequel to the analyzer. Warren Weaver, head of the foundation’s Natural Sciences Division, was at first unconvinced. Engineering was not part of his portfolio. Bush, however, touted the limitless potential of his new machine for scientific applications – especially in mathematical biology, a pet project of Weaver’s. He also promised numerous improvements on the existing analyzer, especially “the possibility of enabling the analyzer to switch rapidly from one problem to another in the manner of the automatic telephone exchange.”9 In 1936 his attentions paid off, in the form of an $85,000 grant to build this new device, which would be known as the Rockefeller differential analyzer.
As a practical calculating device, the Rockefeller analyzer was not a great success. Bush, now Vice President and Dean of Engineering for MIT, had little spare time to oversee its development. In fact, he soon absented himself entirely, to take on the presidency of the Carnegie Institution, in Washington, D.C. He felt the looming shadow of war, and had firm ideas about how science and industry could be made to serve America’s military needs. He wanted, therefore, to be close to the centers of power, where he could better influence matters.
In the meantime, the lab struggled with the technical challenges posed by the new design, and soon became distracted with pressing war work. The Rockefeller machine was not completed until 1942, years behind schedule. The military found its services useful for churning out ballistic firing tables. It was, however, soon overshadowed by the purely digital computers that our story is building towards – those that represented numbers not directly as physical quantities, but abstractly in the positions of switches. It so happens, though, that the Rockefeller analyzer itself contained quite a few such switches, in a relay circuit of its own.
In 1936, Claude Shannon was just twenty years old, but newly graduated from the University of Michigan with a dual degree in electrical engineering and mathematics, when he came across an advertisement on a corkboard. Vannevar Bush was looking for a new research assistant to work on the differential analyzer. He did not hesitate to apply, and soon began working on some of the problems presented by the new analyzer just then taking shape.10
Shannon was a very different sort of man from Bush. He was no man of business, no academic empire builder, no administrator of men and money. He was a life-long lover of games, puzzles, and diversions: chess, juggling, mazes, cryptograms. Like most men of the era, he dedicated himself to serious business during the war – taking a government contract position at Bell Labs that would shield his frail body from the draft.11 His studies on fire-control and cryptography during this time led, in turn, to his seminal paper on information theory (which is outside the scope of the present tale). In the 1950s, though, the war and its consequences behind him, he returned to teach at MIT, spending his spare time on his amusements: a calculator that operated solely in roman numerals; a machine that, when turned on, reached out a hand to switch itself back off.
The Rockefeller machine that he confronted now, although logically similar in structure to the 1931 analyzer, had entirely different physical components. Bush realized that the architecture of rods and mechanical linkages in the older machine prevented its efficient use. It took many hours of work by skilled mechanics to set up a problem before any actual computation could be done.
The new analyzer did away with all that. At its center, replacing the table of mechanical rods, sat a crossbar switch – a surplus prototype donated by Bell Labs. Rather than relying on power transmitted from a central driveshaft, an electric motor (or servo) drove each integrator unit independently. Setting up a new problem was simply a matter of configuring the relays within the crossbar lattice in order to wire up the integrators in the desired pattern. A punched paper tape reader (borrowed from another telecommunications device, the teletypewriter) read in the machine configuration, and a relay circuit translated the signals from the tape reader into control signals for the crossbar – as if it were setting up a series of telephone calls between the integrators.
In addition to being much faster and easier to configure, the new machine operated with greater speed and precision than its predecessor, and could handle more complex problems. Already this computer that we would now consider primitive, even quaint, presented to observers the intimation of some great – or perhaps terrible – mind at work:12
In effect, it is a mathematical robot, an electrically driven automaton which has been fashioned not merely to relieve human brains of the drudgery of difficult calculation and analysis, but actually to attack and solve mathematical problems which are beyond the reach of mental solution.
Shannon focused his attention in particular on the translation of the paper tape into instructions to the ‘brain’, and the relay circuit that carried out this operation. He noticed a correspondence between the structure of that circuit, and the mathematical structures of Boolean algebra, which he had studied in his senior year at Michigan. It was an algebra whose operands were true and false, and whose operators were and, or, not, etc.; an algebra that corresponded to statements of logic.
After spending the summer of 1937 working at Bell Labs in Manhattan – an ideal place to think about relay circuits – Shannon turned this insight into a master’s thesis entitled “A Symbolic Analysis of Relay and Switching Circuits.” Along with a paper by Alan Turing written the year before (discussed briefly below), this formed the first foundation for a science of computing machines.
Shannon found that one could directly translate a system of equations of propositional logic into a physical circuit of relay switches, through a rote procedure. “In fact,” he concluded, “any operation that can be completely described in a finite number of steps using the words if, or, and, etc. can be done automatically with relays.”13 For example, two relay-controlled switches wired in series formed a logical and – current would flow through the main wire only if both electromagnet circuits were activated to close the switches. Likewise two relays in parallel formed an or – current would flow through the main circuit if either electromagnet were activated. The outputs of such logic circuits could, in turn, control the electromagnets of other relays, to make more complex logical operations, such as (A and B) or (C and D).
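Shannon's translation rule is simple enough to sketch in a few lines: treat each relay contact as a boolean value, series wiring as and, parallel wiring as or. (The function names below are my own shorthand, not Shannon's notation.)

```python
# A sketch of Shannon's correspondence between relay circuits and Boolean
# algebra: contacts in series form an AND, contacts in parallel an OR.

def series(*contacts):
    """Series-wired contacts conduct only if every contact is closed (AND)."""
    return all(contacts)

def parallel(*contacts):
    """Parallel-wired contacts conduct if any contact is closed (OR)."""
    return any(contacts)

# The compound circuit from the text, (A and B) or (C and D):
def compound(a, b, c, d):
    return parallel(series(a, b), series(c, d))

print(compound(True, True, False, False))   # True: the A-B branch conducts
print(compound(True, False, False, True))   # False: neither branch conducts
```

Because the output of one such circuit can energize the electromagnet of another relay, these expressions compose without limit, just as Boolean formulas do.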
Shannon concluded the thesis with an appendix containing several example circuits constructed according to his method. Because the operations of Boolean algebra were very similar to arithmetic operations in base two (i.e. using binary numbers), he showed how one could construct from relays an “electric adder to the base two,” – we would call this a binary adder. Just a few months later, another Bell Labs scientist built just such an adder on his kitchen table.
George Stibitz, a researcher in the mathematics department at Bell Labs’ headquarters at 463 West Street in Manhattan, took an odd assortment of equipment home with him on a dark November evening of 1937. Dry battery cells, two tiny switchboard light bulbs, and a pair of flat-construction Type U relays, retrieved from a junk bin. With this, some spare wire, and a few other bits of scrap, he built in his apartment a device that could sum two single-digit binary inputs (represented as the presence or absence of an input current), and signal the two-digit output on the bulbs: on for one, off for zero.14
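The logical behavior of Stibitz's kitchen-table device can be sketched as a one-bit binary adder: the carry bulb is an and of the inputs, the sum bulb an exclusive-or. (A loose functional sketch of what the device computed, not a model of its actual relay wiring.)

```python
# A functional sketch of a one-bit binary adder like Stibitz's: sum two
# single binary digits and report the two-digit binary result.

def one_bit_adder(a, b):
    """Return (carry, sum) for binary digits a and b (each 0 or 1)."""
    return a & b, a ^ b  # carry bulb = AND, sum bulb = XOR

for a in (0, 1):
    for b in (0, 1):
        carry, total = one_bit_adder(a, b)
        print(f"{a} + {b} = {carry}{total}")  # e.g. 1 + 1 = 10 in binary
```

Chaining such stages, with each carry fed into the next digit's addition, yields a multi-digit binary adder of the kind Shannon had described on paper.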
Stibitz, trained as a physicist, had been asked to look into the physical properties of relay magnets, and having no prior familiarity with relays in general, began to study up on how Bell used them in its telephone circuits. He soon noticed the similarity between some of those arrangements and the arithmetic of binary numbers. Intrigued, he set off on his little kitchen-table side project.
At first, Stibitz’ tinkerings with relays generated little interest among the higher-ups at Bell Labs. Sometime in 1938, however, the head of his research group asked him whether his calculators might be used to do arithmetic on complex numbers (of the form a + bi, where i is the square root of negative one). It turned out that several computing offices within Bell Labs were groaning under the constant multiplication and division of such numbers required by their work. Multiplying a single pair of complex numbers required four arithmetic operations on a desk calculator; dividing them required sixteen. Stibitz asserted that yes, he could solve this problem, and set out to design a machine that would prove it.
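The arithmetic burden is easy to see if we write complex multiplication and division out in terms of real operations, as the computing offices had to. (This is the standard textbook decomposition, shown purely as an illustration; it is not Stibitz's circuit design.)

```python
# Complex arithmetic decomposed into real operations, the way a human
# computer had to carry it out on a desk calculator.

def complex_multiply(a, b, c, d):
    """(a + bi) * (c + di): four multiplications plus two additions."""
    return a * c - b * d, a * d + b * c

def complex_divide(a, b, c, d):
    """(a + bi) / (c + di): multiply through by the conjugate of the divisor."""
    denom = c * c + d * d
    return (a * c + b * d) / denom, (b * c - a * d) / denom

print(complex_multiply(1, 2, 3, 4))  # (-5, 10), i.e. (1+2i)(3+4i) = -5+10i
```

Every one of those intermediate products had to be keyed in and transcribed by hand, which is why the offices welcomed a machine that took a pair of complex numbers in and gave the answer back whole.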
The final design, fleshed out by veteran telephone engineer Samuel Williams, was dubbed the Complex Number Computer, or Complex Computer for short, and was put into service in 1940. It computed with 450 relays, and stored intermediate results in ten crossbar switches. Input was entered and responses received via teletypewriter, of which three were installed at the Bell Labs departments with the greatest need. Relays, crossbar, teletype: here was a product of the Bell system, through and through.
The Complex Computer had its finest hour on September 11, 1940. Stibitz gave a paper on the computer at a meeting of the American Mathematical Society at Dartmouth College, and he arranged for a teletypewriter to be set up at McNutt Hall, linked by telegraph back to the Complex Computer in Manhattan, some 250 miles away. Attendees could step up to the machine, enter a problem at the keyboard, and see the solution typed out as if by magic, less than a minute later. Among the attendees who took a turn at the teletype were John Mauchly and John von Neumann, each of whom has an important part to play later in our story.
They had seen a brief glimpse of a future world. Because later computers were so expensive,15 administrators could not afford to let them sit idle while the user scratched his chin at the console, pondering what to type in next. Not for another twenty years would computer scientists figure out how to make general-purpose computers appear to be always awaiting your next input, even while working on someone else’s, and it would take nearly another twenty years for this interactive mode of computing to enter the mainstream.
Admirable as the Complex Computer was in its métier, it was by no means a computer in the modern sense. It could carry out arithmetic in complex numbers and perhaps be adjusted to solve other related problems, but it was not designed for general-purpose problem-solving, and had no programmability: it could neither be instructed to perform its operations in arbitrary order, nor instructed to carry them out repeatedly. It was a calculator that happened to perform certain calculations with much greater convenience than its predecessors.
The advent of the Second World War, however, called forth from Bell, with the guidance of Stibitz, a further series of computers, later known as Model II through Model VI (the Complex Computer becoming, retroactively, Model I). Most were built at the behest of the National Defense Research Committee, headed by none other than Vannevar Bush, and Stibitz pushed their design towards increasing generality of function and programmability.
The Ballistic Calculator (later Model III), for example, designed to aid in the development of anti-aircraft fire-control systems, went into service in 1944 at Fort Bliss, Texas. It had 1,400 relays, and could execute a program of mathematical operations defined by a sequence of instructions stored on a loop of paper tape. Input data came from a separate tape feed, and it had additional feeds for tabular data – to quickly look up the values of, for instance, trigonometric functions, without having to do an actual calculation. Bell engineers developed special ‘hunting circuits’ to scan forward and back on the tape to find the address of the desired table value, independently of the calculation. Stibitz estimated that, with its relays chattering away day and night, the Model III replaced the work of 25-40 computing ‘girls’.16
The Model V, completed too late to see war service, offered even greater flexibility and power to its users. Measured in terms of the computational labor it replaced, it had roughly ten times the computing capacity of the Model III. With 9,000 relays, it contained multiple computing units fed from multiple input stations where a user could set up a problem. Each such station held one tape reader for the data input and five for the instructions – allowing for the invocation of multiple sub-routines during the execution of the main tape. A master control unit (in effect an operating system) allocated instructions to each of the computing units depending on its availability, and programs could execute conditional branches (i.e. jump to one part of the program or another, depending on the current state of execution). Here was no mere calculator.
Annus Mirabilis: 1937
The year 1937 (roughly speaking)17 stands as a pivotal moment in the history of computing. We have already seen how, in that year, Shannon and Stibitz both noticed a homology between circuits of relays and mathematical functions, an insight which led Bell Labs to design a whole series of important digital machines. In a kind of exaptation – or even transubstantiation – the humble telephone relay had also become an embodied abstraction of mathematics and logic, without changing its physical form.
At the beginning of that same year, British mathematician Alan Turing’s “On Computable Numbers, with an Application to the Entscheidungsproblem” appeared in the January 1937 issue of the Proceedings of the London Mathematical Society. In it, he described a universal computing machine: one that could, he argued, perform actions logically equivalent to all those carried out by a human mathematical laborer. Turing, who had arrived at Princeton University the previous year for graduate studies, was also intrigued by relay circuits, and, like Bush, he worried about the looming threat of war with Germany. So he took up a cryptographic side project, a binary multiplier that might be used to encrypt war-time messages, built from relays that he had crafted in the Princeton machine shop.18
Sometime in 1937, Howard Aiken composed a proposal for an “Automatic Calculating Machine.” A Harvard graduate student in electrical engineering, Aiken had ground through his fair share of calculations with the aid only of a mechanical calculator and printed books of mathematical tables. He proposed a design that would eliminate such tedium – unlike existing calculating machinery, it would automatically and repeatedly carry out the processes assigned to it, and use the outputs of previous steps as inputs to the next.19
Meanwhile, outside the English-speaking world, similar developments were in motion. Across the Pacific, Nippon Electric Company (NEC) telecommunications engineer Akira Nakashima had been exploring the relationship between relay circuits and mathematics since 1935. Finally, in 1938, he independently proved the same equivalency between relay circuits and Boolean algebra that Shannon had found the year before.20
In Berlin, Konrad Zuse, a one-time aircraft engineer benumbed by the endless calculations the job required, was looking for funding for his second computing machine. He had never gotten his purely mechanical V1 device21 to work reliably, so instead he planned to make a relay computer, designed with the aid of his friend, telecommunications engineer Helmut Schreyer.
The ubiquity of the telephone relay, the insights of mathematical logic, the desire of bright minds to escape from dull work, were intertwining to create a vision of a new kind of logic machine.
The Lost Generation
The fruits of 1937 took several years to ripen. War proved a most potent fertilizer, and with its arrival relay computers began to spring up wherever the necessary technical expertise existed. Mathematical logic had provided a trellis across which electrical engineering then stretched its vines, developing new forms of programmable computing machines, the first draft of the modern computer as we know it.
In addition to Stibitz’s machines, the United States could by 1944 boast the Harvard Mark I / IBM Automatic Sequence Controlled Calculator (ASCC), the eventual fruit of Aiken’s 1937 proposal. It was a collaboration between academia and industry gone sour, hence the two names, depending on who is claiming the credit. The Mark I/ASCC used relay control circuits, but followed the structure of IBM’s mechanical calculators for its core arithmetic unit. It was built in time to serve the Navy Bureau of Ships during the war. Its successor, the Mark II, went into operation in 1948 at the Naval Proving Ground, and was fully converted to relay-based operation: it boasted 13,000 of them.22
Zuse built several relay computers during the war, of increasing complexity and sophistication, culminating in the V4, which, like the Bell Model V, included facilities for invoking sub-routines and conditional branching. Due to material shortages, no one in Japan exploited the discoveries of Nakashima and his compatriots until after the country’s post-war recovery. But in the 1950s, the newly formed Ministry of International Trade and Industry (MITI) sponsored two relay machines, the second a 20,000 relay behemoth. Fujitsu, which had collaborated in the construction of these machines, developed its own commercial spin-offs.23
These machines are almost entirely forgotten. There is only one name that has survived: ENIAC. The reason had nothing to do with sophistication or capability; it had everything to do with speed. The computational and logical properties of relays that Shannon, Stibitz, and others had uncovered pertained to any kind of device that could be made to act as a switch. And it just so happened that another such device was available, an electronic switch that could be switched on and off hundreds of times faster than a relay.
The importance of World War II to the history of computing should already be evident. For the rise of electronic computing, it was essential. The outbreak of war on a scale more terrible than any before known unleashed the resources needed to overcome the evident weaknesses of the electronic switch, signaling that the reign of the electro-mechanical computers would be brief. Like the titans, they were to be overthrown by their children. As with the relay, electronic switching was born from the needs of the telecommunications industry. To find out where it came from, we must now rewind our story to the dawn of the age of radio.