
Microprocessor


A Microprocessor is an integrated circuit whose function is almost infinitely changeable through programming.


Genesis


In the 1950s and 1960s, computers were large. Mainframe machines filled entire rooms. Minicomputers were smaller but still chunky, though there were a few desktop models. All of them were built from function-specific modules — units for arithmetic and logic, memory managers, input/output controllers, for example — each of which was constructed from integrated circuits (ICs, the first 'silicon chips') and discrete components.

The first commercial Microprocessor, Intel's 4004, launched in November 1971. It was a single integrated circuit like those used in computer modules but of far greater complexity. It compressed a mainframe or mini's entire Central Processing Unit onto a tiny sliver of silicon. It required separate memory, I/O, and program storage chips, but it was architecturally no different from the 'big iron' running company payrolls, handling airline bookings, logging data in labs, or monitoring nations' defences.


The Microprocessor would come to dominate computing. Today, all computers, from the largest supercomputer to the smallest Internet of Things (IoT) device, derive their computing capacity from Microprocessors.

The irony is that Microprocessors were never intended to be used in computers. The first Microprocessors, developed simultaneously by Intel and Texas Instruments, were attempts to simplify application-specific chipsets by replacing fixed-function components with programmable ones. Clients had asked both companies the same question: how can we replace dozens of chips with just a handful? The solution: create a general-purpose platform that could be programmed to deliver any desired application's functionality.


It was amateur computing buffs who latched onto the Microprocessor as the basis for cheap computers. Enthusiasts in the US, the UK, and other countries couldn't afford minicomputers, even small ones, so they had to build their own. Some parts were taken from machines discarded by businesses and institutions. Other modules were hand-assembled. But the new Microprocessor technology allowed these hobbyists to build computers with far fewer parts and at far lower cost.

That's why Steve Wozniak built the Apple I. It was his personal hobby machine — until Steve Jobs realized it could be sold to enthusiasts who lacked Woz's electronics skills. MITS' Altair kit, featured on the January 1975 cover of Popular Electronics magazine, was Wozniak's inspiration, as it was for other 'micro' pioneers. In the UK, John Miller-Kirkpatrick, Tim Moore, Chris Shelton, and Mike Fischer designed machines that they or their clients would commercialize.


That was in the mid-1970s. Within a couple of years, on both sides of the Atlantic, enthusiasts were becoming entrepreneurs and offering their designs as commercial systems, first as single-board computers — naked but complete circuit boards — then as packaged computers with integrated keyboards. Then came the killer applications, then the graphical user interface (GUI), then the OS wars — Windows vs Mac OS. By the 1990s, all personal computing was done on microprocessor-based systems. Many of the PCs in use in business were linked over networks to microprocessor-based servers. Out went the minicomputer. The servers were soon connected to other servers, and data centers ditched mainframes for racks of microprocessor-filled units. Today, the entire Internet runs on them, even the routers that manage the massive flow of data around the global network. Mainframes are museum pieces.

Today there are billions, possibly trillions, of microprocessors in use all over the world, in almost every electronic device. The Microprocessor has achieved all this in its first 50 years. What will the next 50 years hold for this truly revolutionary technology?

