Things really start to get interesting when we come to the concept of microprocessors, because these little rapscallions provide us with the ability to create an incredible range of products that would otherwise be impossible. But first we need to consider the precursor technologies of transistors and integrated circuits …

The First Transistors
The transistor and subsequently the integrated circuit must certainly qualify as two of the greatest inventions of the twentieth century. These devices are formed from materials known as semiconductors, whose properties were not well understood until the 1950s. However, as far back as 1926, Dr. Julius Edgar Lilienfeld (1882-1963), who was born in Austria-Hungary but later emigrated to America, filed for a patent on what we would now recognize as a field-effect transistor being used in the role of an amplifier (the title of the patent was “Method and apparatus for controlling electric currents”).

Unfortunately, serious research on semiconductors didn’t really commence until World War II. At that time it was recognized that devices formed from semiconductors had potential as amplifiers and switches, and could therefore be used to replace the prevailing technology of vacuum tubes while being much smaller, lighter, and less power-hungry. All of these factors were of interest to the designers of the radar systems that were to play such a large role in the war.

Bell Laboratories in the United States began research into semiconductors in 1945, and physicists William Shockley (1910-1989), Walter Brattain (1902-1987), and John Bardeen (1908-1991) succeeded in creating the first point-contact germanium transistor on December 23, 1947. (They took a break for the Christmas holidays before publishing their achievement, which is why some reference books state that the first transistor was created in 1948.) In 1950, Shockley invented a new device called the bipolar junction transistor (BJT), which was more reliable, easier and cheaper to build, and gave more consistent results than point-contact devices. (Apropos of nothing at all, the first TV dinner was marketed by the C.A. Swanson company three years later.)

By the late 1950s, bipolar transistors were being manufactured out of silicon rather than germanium (although germanium had certain electrical advantages, silicon was cheaper and easier to work with). Bipolar junction transistors are formed from three regions of doped silicon called the collector, base, and emitter. The original bipolar transistors were manufactured using the mesa process, in which a doped piece of silicon called the mesa (or base) was mounted on top of a larger piece of silicon forming the collector, while the emitter was created from a smaller piece of silicon embedded in the base.

In 1959, the Swiss physicist Jean Hoerni (1924-1997) invented the planar process, in which optical lithographic techniques were used to diffuse the base into the collector and then diffuse the emitter into the base. One of Hoerni's colleagues, Robert Noyce (1927-1990), invented a technique for growing an insulating layer of silicon dioxide over the transistor, leaving small areas over the base and emitter exposed and depositing thin layers of aluminum onto these areas to create wires. The processes developed by Hoerni and Noyce led directly to modern integrated circuits.

In 1962, Steven Hofstein and Fredric Heiman at the RCA research laboratory in Princeton, New Jersey, invented a new family of devices called metal-oxide semiconductor field-effect transistors (MOSFETs for short). Although these transistors were somewhat slower than bipolar transistors, they were cheaper and smaller and consumed less power. Also of interest was the fact that modified metal-oxide semiconductor structures could be made to act as capacitors or resistors.




 
The First Integrated Circuits
To a large extent, the push toward miniaturization was driven by the demands of the American space program. For some time, people had been thinking that it would be a good idea to fabricate entire circuits on a single piece of semiconductor. The first public discussion of this idea is credited to a British radar expert, G.W.A. Dummer, in a paper presented in 1952. However, it was not until the summer of 1958 that Jack St. Clair Kilby (1923-2005), working for Texas Instruments, succeeded in fabricating multiple components on a single piece of semiconductor. Kilby's first prototype was a phase-shift oscillator comprising five components on a piece of germanium half an inch long and thinner than a toothpick. Although manufacturing techniques subsequently took different paths from those used by Kilby, he is still credited with the creation of the first true integrated circuit.

Around the same time that Kilby was working on his prototype, two of the founders of Fairchild Semiconductor – the Swiss physicist Jean Hoerni and the American physicist Robert Noyce – were working on more efficient processes for creating these devices. Between them, Hoerni and Noyce invented the optical lithographic techniques that are now used to create transistors, insulating layers, and interconnections on integrated circuits.

By 1961, Fairchild and Texas Instruments had announced the availability of the first commercial planar integrated circuits comprising simple logic functions. This announcement marked the beginning of the mass production of integrated circuits. In 1963, Fairchild produced a device called the 907 containing two logic gates, each of which consisted of four bipolar transistors and four resistors. The 907 also made use of isolation layers and buried layers, both of which were to become common features in modern integrated circuits.

In 1967, Fairchild introduced a device called the Micromosaic, which contained a few hundred transistors. The key feature of the Micromosaic was that the transistors were not initially connected to each other. A designer used a computer program to specify the function the device was required to perform, and the program determined the necessary transistor interconnections and constructed the photomasks required to complete the device. The Micromosaic is credited as the forerunner of the modern application-specific integrated circuit (ASIC), and also as the first real application of computer-aided design. In 1970, Fairchild introduced the first 256-bit static RAM, the 4100; that same year, Intel announced the first 1024-bit dynamic RAM, the 1103.




 
Hindsight, the One Exact Science
With the benefit of hindsight, the advent of the microprocessor appears to have been an obvious development. However, this was less than self-evident at the time for a number of reasons, not the least of which was that computers of the day were big, expensive, and a complete pain to use. Although these reasons would appear to support the development of the microprocessor, by some strange quirk of fate they actually worked against it.

Because computers were so big and expensive, only large institutions could afford them, and they were used only for computationally intensive tasks. Thus, following a somewhat circular argument, popular opinion held that only large institutions needed computers in the first place. Similarly, because computers were few and far between, only a chosen few had any access to them, which meant that only a handful of people had the faintest clue as to how they worked. Coupled with the fact that the early computers were difficult to use in the first place, this engendered the belief that only heroes (and heroines) with size-sixteen turbo-charged brains (the ones with “go-faster” stripes on the sides) had any chance of being capable of using them at all. Last but not least, a typical computer of the day required many thousands of transistors, and the thrust was toward yet more powerful machines in terms of raw number-crunching capability. Integrated circuit technology, however, was still in its infancy, and it wasn’t possible to construct even a few thousand transistors on a single integrated circuit until the late 1960s, at which point things really started to get interesting …




 
Intel’s 4004
Based on the discussions in the previous topic, it would appear that the (potential) future of the (hypothetical) microprocessor looked somewhat bleak, but fortunately other forces were afoot. Although computers were somewhat scarce in the 1960s, there was a large and ever-growing market for electronic desktop calculators. In 1969, the Japanese calculator company Busicom approached Intel with a request to design a set of twelve integrated circuits for use in a new calculator. The task was presented to one Marcian “Ted” Hoff (1937-), a man who could foresee a somewhat bleak and never-ending role for himself designing sets of special-purpose integrated circuits for one-of-a-kind tasks. However, during his early ruminations on the project, Hoff realized that rather than design the special-purpose devices requested by Busicom, he could create a single integrated circuit with the attributes of a simple-minded, stripped-down, general-purpose computer processor.

During the fall of 1969, Hoff worked with applications engineer Stan Mazor (1941-) to develop the architecture for a chip with a 4-bit central processing unit. In April 1970, Federico Faggin (1941-) came to work at Intel, and undertook the task of translating Hoff and Mazor’s architecture into silicon.

The result of Hoff’s inspiration was the world’s first microprocessor, the 4004, whose ‘4’s were used to indicate that the device had a 4-bit data path. The 4004, which first saw the light of day in 1971, was part of a four-chip system that also included a 256-byte ROM, a 320-bit RAM, and a 10-bit shift register. The 4004 itself contained approximately 2,300 transistors and could execute 60,000 operations per second. The advantage (as far as Hoff was concerned) was that by simply changing the external program, the same device could be used for a multitude of future projects.
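
Just to make Hoff’s “change the program, not the silicon” argument a little more concrete, here is a minimal sketch (written in Python) of a toy 4-bit accumulator machine. Be aware that this is purely illustrative and does not model the real 4004’s instruction set; the opcode names and the run() function are hypothetical inventions for this example.

    # A toy 4-bit accumulator machine (NOT the real 4004 instruction set).
    def run(program, value):
        """Execute a list of (opcode, operand) pairs on a 4-bit accumulator."""
        acc = value & 0xF                    # a 4-bit data path, like the 4004's
        for opcode, operand in program:
            if opcode == "ADD":
                acc = (acc + operand) & 0xF  # arithmetic wraps around at 4 bits
            elif opcode == "SUB":
                acc = (acc - operand) & 0xF
            elif opcode == "SHL":
                acc = (acc << operand) & 0xF
        return acc

    # Two different "products" built from the same pretend hardware, simply by
    # swapping the externally supplied program:
    add_three = [("ADD", 3)]   # behaves like a little adder
    double    = [("SHL", 1)]   # behaves like a doubler

    print(run(add_three, 4))   # prints 7
    print(run(double, 4))      # prints 8

The point being that two quite different “products” fall out of exactly the same piece of hardware simply by swapping the program that is fed to it.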

Knowing how pervasive microprocessors were to become, you might be tempted to imagine that there was a fanfare of trumpets and that Hoff was immediately acclaimed as the master of the known universe, but such was not to be the case. The 4004 was so radically different from what Busicom had requested that they didn’t immediately recognize its implications (much as if they’d ordered a Chevy Cavalier, which had suddenly transmogrified itself into an Aston Martin), so they politely said that they weren’t really interested and could they please have the twelve-chip set they’d originally requested (they did eventually agree to use the fruits of Hoff’s labors).




 
Was the 4004 Really the First?
Of course, nothing is simple. In February 1968, the International Research Corporation, based in San Martin, California, developed its own architecture for a computer-on-a-chip modeled on an enhanced PDP-8/S concept.

Similarly, in May 1968, Wayne Pickette, working at International Research, made a proposal to Fairchild Semiconductor that they develop his design for a computer-on-a-chip (Fairchild turned him down).

And in December 1970, Gilbert Hyatt filed a patent application entitled "Single Chip Integrated Circuit Computer Architecture." Hyatt's patent application started wending its way through the system a year before Hoff, Mazor, and Faggin created the 4004, which was certainly the first commercially viable microprocessor. However, Hyatt's patent wasn't actually granted until 1990 (US Patent 4,942,516), by which time Hoff, Mazor, and Faggin were almost universally credited with the invention of the microprocessor.

To add insult to injury, Texas Instruments succeeded in having the Hyatt patent overturned in 1996 on the basis that the device it described was never implemented and was not implementable with the technology available at the time of the invention (this ruling is still subject to appeal, so watch this space).




 
Intel’s 4040, 8008, and 8080
In April 1972, Intel introduced the 8008, which was essentially an 8-bit version of the 4004. The 8008 contained approximately 3,300 transistors and was the first microprocessor to be supported by a high-level language compiler, called PL/M. The 8008 was followed in 1974 by the 4040, which extended the 4004’s capabilities by adding logical and compare instructions and by supporting subroutine nesting using a small internal stack.

However, the 4004, 4040, and 8008 were all designed for specific applications, and it was not until April 1974 that Intel presented the first true general-purpose microprocessor, the 8080. This 8-bit device, which contained around 4,500 transistors and could perform 200,000 operations per second, was destined for fame as the central processor of many of the early home computers.




 
The 6800, 6502, and Z80
Following the 8080, the microprocessor field exploded with devices such as the 6800 from Motorola in August 1974, the 6502 from MOS Technology in 1975, and the Z80 from Zilog in 1976 (to name but a few).

Unfortunately, documenting all of the different microprocessors would require a book in its own right, so we won’t even attempt the task here. Instead, we’ll create a cunning diversion that will allow us to leap gracefully into the next topic … Good grief! Did you see what just flew past your window?



Note: The material presented here was abstracted and condensed from The History of Calculators, Computers, and Other Stuff document provided on the CD-ROM accompanying our book How Computers Do Math (ISBN: 0471732788).