The advent of the first electromechanical computers was an exciting time, because these scamps could actually provide significant amounts of computational power. But before we plunge into the fray, we need to consider a few precursor inventions such as the relay and the vacuum tube …

The Electromechanical Relay
Towards the end of the nineteenth century, when Queen Victoria (1819-1901) still held sway over all she surveyed, the most sophisticated form of control for electrical systems was the electromechanical relay.

These devices consisted of a rod of iron (or some other ferromagnetic material) wrapped in a coil of wire. (Although we're talking about them in the past tense here, relays are still employed for a wide variety of applications to this day.) Applying an electrical potential across the ends of the coil caused the iron rod to act like a magnet. The magnetic field could be used to attract another piece of iron acting as a switch [see (a) in the figure below]. Removing the potential from the coil caused the iron bar to lose its magnetism, and a small spring (not shown here) would return the switch to its inactive state [see (b) in the figure below].

An electromechanical relay

Although simple in concept, these devices were to prove extremely important in the context of early electromechanical computers. This is because the output from one relay – or from a number of relays – can be used to control other relays, and the output from these other relays can be used to control yet more relays, and so forth. Thus, by connecting relays together in different ways, it's possible to create all sorts of things, including computers as discussed later in this paper.
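
By way of illustration, here is a minimal sketch (in Python, and purely a modern analogy rather than anything from the historical machines) of how a relay can be modeled as a controlled switch, and of how wiring such switches in series or parallel yields logic functions:

```python
# Minimal illustrative sketch: a relay modeled as a controlled switch.
# All names are our own; this is an analogy, not a historical circuit.

def relay(coil_energized: bool, contact_input: bool) -> bool:
    """A normally-open relay: the contact passes its input signal
    only while current flows through the coil."""
    return contact_input if coil_energized else False

# Two relays in series behave like an AND gate...
def and_gate(a: bool, b: bool) -> bool:
    return relay(b, relay(a, True))

# ...while two relays in parallel behave like an OR gate.
def or_gate(a: bool, b: bool) -> bool:
    return relay(a, True) or relay(b, True)

for a in (False, True):
    for b in (False, True):
        print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b))
```

From gates like these, adders, memories, and ultimately entire computers can be assembled, which is precisely what the pioneers described below went on to do.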

The Invention of the Vacuum Tube
Thomas Alva Edison

Now here's a bit of a poser for you – who invented the first electric light bulb? If your immediate response was "The legendary American inventor, Thomas Alva Edison," then you'd undoubtedly be in the majority, but being in the majority doesn't necessarily mean that you're right. It's certainly true that Edison (1847-1931) did invent a light bulb, but he wasn't the first to do so.

In 1860, an English physicist and electrician, Sir Joseph Wilson Swan (1828-1914), produced his first experimental light bulb using carbonized paper as a filament. Unfortunately, Swan didn't have a strong enough vacuum or sufficiently powerful batteries, and his prototype didn't achieve complete incandescence, so he turned his attention to other pursuits.

Fifteen years later, in 1875, Swan returned to consider the problem of the light bulb and, with the aid of a better vacuum and a carbonized thread as a filament (the same material Edison eventually decided upon), he successfully demonstrated a true incandescent bulb in 1878 – a year earlier than Edison (to be fair, we should also note that there were a number of other contenders prior to Swan). Furthermore, in 1880, Swan gave the world's first large-scale public exhibition of electric lamps at Newcastle, England.

So it's reasonable to wonder why Edison received all of the credit while Swan was condemned to obscurity. The more cynical among us may suggest that Edison was thrust into the limelight because so many people learn their history through films, and the vast majority of early films were made in America by patriotic Americans. However, none of this should detract from Edison who, working independently, experimented with thousands of filament materials and expended tremendous amounts of effort before discovering carbonized thread. It is also probably fair to say that Edison did produce the first commercially viable light bulb.

The term limelight mentioned above comes from the incandescent light produced by a rod of lime bathed in a flame of oxygen and hydrogen. At the time it was invented (circa 1820), limelight was the brightest source of artificial light known. One of its first uses was for lighting theater stages, and actors and actresses were keen to position themselves "in the limelight" so as to be seen to their best effect.

In 1879, Edison publicly exhibited his incandescent electric light bulb for the first time. Edison's light bulbs employed a conducting filament mounted in a glass bulb from which the air was evacuated, leaving a vacuum. Passing electricity through the filament caused it to heat up enough to become incandescent and radiate light, while the vacuum prevented the filament from oxidizing and burning up.

But we digress... In 1883, William Hammer (an engineer working for Edison) observed that he could detect a current flowing from the lighted filament to a metal plate mounted inside an incandescent light bulb. Known (rather unfairly) as the Edison Effect, this phenomenon was subsequently used by the English electrical engineer John Ambrose Fleming (1849-1945) to create a vacuum tube rectifier in 1904. This device, which Fleming called a thermionic valve, was also referred to as a vacuum diode (where the term "diode" was used because it had two terminals).

Vacuum diodes were soon used in radio receivers to convert alternating current (AC) to direct current (DC), and also to detect radio frequency signals. Unfortunately, Fleming didn’t fully appreciate the possibilities inherent in his device, and it was left to the American inventor Lee de Forest (1873-1961) to take things to the next stage.

In 1907, de Forest conceived the idea of placing an open-meshed grid between the cathode (the heated filament) and the positively biased anode (called the plate). By applying a small voltage to the grid of his Audion tube (which became known as a triode because it had three terminals), de Forest could cause a much larger voltage change to be generated at the plate.
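
To put a rough number on "a much larger voltage change," here is a first-order illustration of our own (the amplification factor is an invented, merely plausible value, not a figure from de Forest's work):

```python
# Rough first-order sketch: a triode turns a small grid-voltage swing
# into a larger (and inverted) plate-voltage swing. The amplification
# factor below is an invented, plausible value for illustration; real
# stage gain also depends on the load and the tube's plate resistance.

MU = 20.0  # assumed amplification factor (mu) for a hypothetical triode

def plate_voltage_swing(grid_swing_volts: float) -> float:
    """Small-signal approximation: delta_V_plate ~= -mu * delta_V_grid."""
    return -MU * grid_swing_volts

print(plate_voltage_swing(0.1))  # 0.1 V at the grid -> a 2 V (inverted) swing at the plate
```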

This was extremely significant for the fledgling radio industry, because it became possible to amplify radio signals captured by the antenna before passing them to the detector stage. This made it possible to detect and use much weaker signals over much larger distances than had previously been possible. (De Forest presented the first live opera broadcast and the first news report on radio.)

Collectively, diodes, triodes, and their later cousins are referred to as vacuum tubes. (They are also known as valves in England, because they can be used to control the flow of electricity, much as their mechanical namesakes control the flow of fluids.) Vacuum tubes revolutionized the field of broadcasting, but they were destined to do much more, because their ability to act as switches was to have a tremendous impact on digital computing.

And finally (for this topic), if you ever happen to be in Dearborn, Michigan, USA, you should take the time to visit the Henry Ford Museum, which contains one of the world's largest collections of light bulbs and vacuum tubes (the author has been there and it's GREAT).

Vannevar Bush and the Differential Analyzer
In 1927, with the assistance of two colleagues at MIT, the American scientist, engineer, and politician Vannevar Bush (1890-1974) designed an analog computer that could solve simple equations. This device, which Bush dubbed a Product Integraph, was subsequently built by one of his students.

Bush continued to develop his ideas and, in 1930, built a bigger version, which he called a Differential Analyzer. The Differential Analyzer was based on the use of mechanical integrators that could be interconnected in any desired manner. To provide amplification, Bush employed torque amplifiers, which were based on the same principle as a ship’s capstan. The final device used its integrators, torque amplifiers, drive belts, shafts, and gears to measure movements and distances (not dissimilar in concept to an automatic slide rule).
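
To give a feel for what "interconnected integrators" means in practice, here is a modern numerical sketch (our own analogy, not Bush's mechanism) of two integrators chained in a feedback loop to solve the simple differential equation y'' = -y:

```python
# Illustrative sketch: two numerical integrators connected in a feedback
# loop, echoing how the Differential Analyzer's mechanical integrators
# were interconnected. We solve y'' = -y (whose solution is a cosine
# wave) using simple semi-implicit Euler integration.

dt = 0.001          # integration step size
y, dy = 1.0, 0.0    # initial conditions: y(0) = 1, y'(0) = 0

t = 0.0
while t < 6.283:    # roughly one full period (2 * pi)
    ddy = -y        # the "interconnection": feed y back as y'' = -y
    dy += ddy * dt  # first integrator: y'' -> y'
    y += dy * dt    # second integrator: y' -> y
    t += dt

print(round(y, 3))  # ~1.0: back where it started after one full period
```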

Although Bush’s first Differential Analyzer was driven by electric motors, its internal operations were purely mechanical. In 1935 Bush developed a second version, in which the gears were shifted electro-mechanically and which employed paper tapes to carry instructions and to set up the gears.

In our age, when computers can be constructed the size of postage stamps, it is difficult to visualize the scale of the problems that these early pioneers faced. To provide some sense of perspective, Bush's second Differential Analyzer weighed in at a whopping 100 tons! In addition to all of the mechanical elements, it contained 2,000 vacuum tubes, thousands of relays, 150 motors, and approximately 200 miles of wire. As well as being a major achievement in its own right, the Differential Analyzer was also significant because it focused attention on analog computing techniques, thereby drawing effort away from the investigation and development of digital solutions for quite some time.

George Stibitz and the Complex Number Calculator
Despite Vannevar Bush’s achievements, not everyone was enamored by analog computing. In 1937, George Robert Stibitz (1904-1995), a scientist at Bell Laboratories, built a digital machine based on relays, flashlight bulbs, and metal strips cut from tin cans. Stibitz’s machine, which he called the “Model K” (because most of it was constructed on his kitchen table), worked on the principle that if two relays were activated they caused a third relay to become active, where this third relay represented the sum of the operation. For example, if the two relays representing the numbers 3 and 6 were activated, this would activate another relay representing the number 9. (A replica of the Model K is on display at the Smithsonian.)
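
To make the principle concrete, here is a brief sketch (a modern rendering in Python of the underlying switch logic, not a reconstruction of Stibitz's actual circuit) of a binary ripple-carry adder of the general kind the Model K embodied:

```python
# Illustrative sketch: a binary ripple-carry adder expressed as switch
# logic, the principle behind Stibitz's relay-based Model K. This is a
# modern rendering, not his circuit.

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Add three one-bit inputs; return (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add(x: int, y: int, bits: int = 4) -> int:
    """Ripple-carry addition: one adder stage (relay group) per bit."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(3, 6))  # 9, echoing the 3 + 6 = 9 example above
```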

Stibitz went on to create a machine called the Complex Number Calculator, which, although not tremendously sophisticated by today's standards, was an important step along the way. In 1940, Stibitz performed a spectacular demonstration at a meeting in New Hampshire. Leaving his computer in New York City, he took a teleprinter to the meeting and proceeded to connect it to his computer via telephone. In the first example of remote computing, Stibitz astounded the attendees by allowing them to pose problems, which were entered on the teleprinter's keyboard; within a short time, the teleprinter presented the answers generated by the computer.

Konrad Zuse and the Z2, Z3, and Z4
Many encyclopedias and other reference works state that the first large-scale automatic digital computer was the Harvard Mark I, which was developed in America between 1939 and 1944 (see also the discussions in the following topic). However, in the aftermath of World War II, it was discovered that a full-fledged, program-controlled electromechanical computer called the Z3 had been completed in Germany in 1941, which means that the Z3 pre-dated the Harvard Mark I.

The Z3's architect was a German engineer called Konrad Zuse (1910-1995). As discussed elsewhere on this site (see The First Mechanical Computers paper on the More Cool Stuff page), Zuse had a fully working mechanical computer called the Z1 running in his parents' living room in Berlin as early as 1938.

Following the creation of his Z1 machine, Zuse became interested in the use of relays. In order to experiment, he created a machine called the Z2. This had the same type of mechanical memory as the Z1, but the arithmetic and control units were constructed using approximately 800 old relays that Zuse acquired from telephone companies. Unfortunately, photos and plans of the Z2 were destroyed by Allied air raids during the war. However, the Z2 served its purpose, because it convinced Zuse that relays were reliable enough for computing applications.

Helped by friends and with a little support from the German government, Zuse constructed his Z3 computer in Berlin between 1939 and 1941. Like his previous machines, the Z3 was based on binary floating-point representations and was freely programmable. Unlike the Z1 and Z2, however, the Z3 was constructed completely out of relays (approximately 600 for the arithmetic unit and 1,800 for the memory and control units).
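
As an aside, the Z3's floating-point word is usually described as comprising one sign bit, a 7-bit exponent, and a 14-bit mantissa. The following sketch decodes such a layout (a simplified illustration of our own that glosses over the real machine's exact conventions and special values, such as its handling of zero):

```python
# Simplified, illustrative decoder for a Z3-style 22-bit float word:
# 1 sign bit, a 7-bit two's-complement exponent, and a 14-bit mantissa
# with an implicit leading 1. The real Z3 had special cases (e.g., for
# zero) that this sketch deliberately ignores.

def decode_z3_word(word: int) -> float:
    sign = -1.0 if (word >> 21) & 1 else 1.0
    exp = (word >> 14) & 0x7F
    if exp >= 64:                        # 7-bit two's complement
        exp -= 128
    mantissa = word & 0x3FFF             # 14 stored fraction bits
    value = 1.0 + mantissa / (1 << 14)   # implicit leading 1
    return sign * value * (2.0 ** exp)

print(decode_z3_word(0b0_0000001_00000000000000))  # +1.0 x 2^1 = 2.0
```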

It is interesting to note that paper was in short supply in Germany during the war, so instead of using paper tape, Zuse was obliged to punch holes in old movie film to store his programs and data. We may only speculate as to the films Zuse used for his hole-punching activities; for example, were any first-edition Marlene Dietrich (1901-1992) classics on the list? (Marlene Dietrich fell out of favor with the Hitler regime when she emigrated to America in the early 1930s, but copies of her films would still have been around during the war.)

The end result was the first freely programmable, binary, floating-point, general-purpose electromechanical computer in the world! Once again, this was an absolutely staggering achievement for one man. (You guessed it – the original Z3 was destroyed by bombing in 1944 and therefore didn't survive the war, but a Z3 was reconstructed in the 1960s.)

In 1943, Zuse started work on a general-purpose relay-based computer called the Z4. Unlike its predecessors, the Z4 did survive the war (hidden in a cave in the Bavarian Alps). By 1950, the Z4 had been sold to, and was up and running at, the ETH (the Swiss Federal Institute of Technology) in Zurich, Switzerland, which makes the Z4 the world's first commercially available computer.

Howard Aiken and the Harvard Mark 1
Howard Aiken

Although Konrad Zuse's Z1, Z2, and Z3 machines (presented in the previous topic) pre-dated most of what was happening in the rest of the world, no one in America knew anything about them at the time. It is for this reason that many consider that the modern computer era commenced with a machine that was developed in America between 1939 and 1944.

This device, the brainchild of a Harvard graduate, Howard H. Aiken (1900-1973), was officially known as the IBM automatic sequence controlled calculator (ASCC), but is more commonly referred to as the Harvard Mark I. The Mark I was constructed out of switches, relays, rotating shafts, and clutches, and was described as sounding like "a roomful of ladies knitting." The machine contained more than 750,000 components, was 50 feet long, 8 feet tall, and weighed approximately 5 tons!

The Mark I’s architecture was significantly different from that of modern machines. The device consisted of many calculators that worked on parts of the same problem under the guidance of a single control unit. Instructions were read in on paper tape, data was provided separately on punched cards, and the device could perform operations only in the sequence in which they were received. This machine was based on numbers that were 23 digits wide – it could add or subtract two of these numbers in three-tenths of a second, multiply them in four seconds, and divide them in ten seconds.

The Harvard Mark 1
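
For a feel of the precision involved, here is a modern illustration (using Python's decimal module, which of course has nothing to do with the Mark I's mechanism) of arithmetic at 23 significant decimal digits:

```python
# Illustrative only: arithmetic at 23 significant decimal digits, the
# width of the Mark I's numbers, using Python's decimal module.
from decimal import Decimal, getcontext

getcontext().prec = 23  # 23 significant decimal digits

a = Decimal("3.1415926535897932384626")
b = Decimal("2.7182818284590452353603")
print(a + b)  # 5.8598744820488384738229 -- instantaneous today,
              # three-tenths of a second on the Mark I
```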

Aiken was tremendously enthused by computers, but, like so many others, he didn’t anticipate the dramatic changes that were to come. In 1947, for example, he predicted that only six electronic digital computers would be required to satisfy the computing needs of the entire United States. Although this may cause a wry chuckle today, it is instructive because it accurately reflects the general perception of computers in that era. In those days, computers were typically considered only in the context of scientific calculations and data processing for governments, large industries, research establishments, and educational institutions. It was also widely believed that computers would always be programmed and used only by experts and intellectual heroes (if only they could see us now).

Note: The material presented here was abstracted and condensed from The History of Calculators, Computers, and Other Stuff document provided on the CD-ROM accompanying our book How Computers Do Math (ISBN: 0471732788).