Special Report

WONDER CHIPS

They've just rolled that old PC out of your office and brought in your new toy--the latest model based on Intel Corp.'s jazzy new 100-megahertz Pentium microprocessor. It is, you have to admit, a pretty amazing piece of work. There, in a $4,000 box, sits the processing power of a 1988-vintage Cray Y-MP supercomputer from Cray Research Inc.--plus a lot of neat stuff that big systems never had, including a CD-ROM disk drive and stereo speakers.

What gives your Pentium PC--or a new Macintosh based on the PowerPC chip from Motorola Inc.--such amazing power is the culmination of decades of progress in the science of chipmaking. The microprocessors and memory chips that make these machines multimedia marvels are the latest proof of what's known as Moore's Law. Intel Chairman Gordon E. Moore observed 30 years ago that, by shrinking the size of the tiny lines that form transistor circuits in silicon by roughly 10% a year, chipmakers unleash a new generation of chips every three years--with four times as many transistors. In memory chips, this has quadrupled the capacity of dynamic random-access memories (DRAMs) every three years. In microprocessors, the addition of new circuits--and the speed boost that comes from reducing the distance between them--has improved performance by four or five times every three years since Intel launched its X86 family in 1979.
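
For readers who want to see the arithmetic, here is a rough sketch of that compounding: shrink the lines about 10% a year and let the transistor budget quadruple each three-year generation. The starting figures (a hypothetical 0.5-micron, 3-million-transistor chip in 1994) are illustrative assumptions, not industry data.

```python
# Illustrative projection of the compounding the article describes: line
# widths shrink about 10% a year, transistor counts quadruple every
# three-year generation. Starting figures are assumptions, not industry data.
FEATURE_SHRINK_PER_YEAR = 0.90      # linear dimensions shrink ~10% a year
TRANSISTORS_PER_GENERATION = 4      # one generation = three years

def project(start_year, years, feature_um, transistors):
    """Print feature size and transistor budget at each three-year step."""
    for year in range(start_year, start_year + years + 1, 3):
        print(f"{year}: ~{feature_um:.2f}-micron lines, "
              f"~{transistors:,} transistors")
        feature_um *= FEATURE_SHRINK_PER_YEAR ** 3
        transistors *= TRANSISTORS_PER_GENERATION

# Hypothetical 0.5-micron, 3-million-transistor chip as the 1994 baseline:
project(1994, years=12, feature_um=0.50, transistors=3_000_000)
```

Run for a dozen years, even this toy projection lands near a billion transistors by the mid-2000s--the gigachip territory the rest of this report describes.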

After a dozen generations of such multiplication, chip performance has snowballed into today's mind-blowing speeds and capacities. And what's astounding is that the cost of this ever-increasing performance keeps falling. Take a 486-class personal computer, the kind you can pick up at the local warehouse club. Between the microprocessor, the DRAMs, and the other chips inside it, you get roughly 100 million transistors that pack the wallop of an IBM 3090 mainframe from 1985. Yet the PC costs less than $1,000. You can't buy 100 million of anything else for so little. That many sheets of toilet paper would run more than $100,000. This miracle of economics--"free" computing power--created the PC revolution and totally changed the electronics industry.

But the revolution has barely started. Compared with what's coming, the miracles that chipmakers have so far delivered like clockwork will seem like child's play. The compounding effect of Moore's Law has now brought semiconductors to the brink of a series of leaps that within a decade will yield by far the most powerful computers ever--and profoundly affect scores of other electronic products as well, ranging from mainframe-caliber telephones to intelligent TVs. The results will, over the next 20 years, change many aspects of everyday life.

Call it gigachip technology--chips with a billion minute transistors connected by a maze of circuitry. The capacity of memory chips has already climbed from 1 million bits to 16 megabits since 1985, or from 30 pages of typed text to 500. But by 2005, a memory chip should hold 4 billion bits (gigabits). That's almost as much as all the text in two sets of the Encyclopaedia Britannica. And when 64-gigabit memories arrive around 2011, they'll store a small library: 27 Britannicas.
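
Those comparisons are just unit conversions. The sketch below redoes them with openly assumed figures--about 4,000 characters to a typed page and on the order of 300 MB of text in a full Encyclopaedia Britannica set--so the results land in the article's ballpark rather than matching it digit for digit.

```python
# Back-of-the-envelope version of the article's comparisons. The per-page and
# per-encyclopedia figures are assumptions (about 4,000 characters per typed
# page, roughly 300 MB of text in one Britannica set), not measured values.
BITS_PER_CHAR = 8
CHARS_PER_PAGE = 4_000
CHARS_PER_BRITANNICA = 300_000_000

chips = [("1-megabit DRAM", 2**20),
         ("16-megabit DRAM", 16 * 2**20),
         ("4-gigabit chip", 4 * 2**30),
         ("64-gigabit chip", 64 * 2**30)]

for label, bits in chips:
    chars = bits / BITS_PER_CHAR
    print(f"{label}: ~{chars / CHARS_PER_PAGE:,.0f} typed pages, "
          f"~{chars / CHARS_PER_BRITANNICA:.1f} Britannica sets")
```

Under those assumptions, the 64-gigabit part comes out at roughly 28 sets--close to the 27 cited above.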

Moreover, the ability to lay a billion transistors on a chip will yield killer microprocessors. As the transistor count on microchips has climbed from 2,300 in 1971 to a few million today, the microprocessor has breezed past the minicomputer and the mainframe. Today, Digital Equipment Corp.'s Alpha chip is even faster than the processor in a Cray Y-MP. "And we don't see anything slowing down these trends," says Richard J. Hollingsworth, DEC's manager of advanced semiconductor development. Gigaprocessors, he adds, "will allow you to do things never thought of before."

Like what? Personal Crays could, for instance, run any software, regardless of which type of computer it was written for. In other words, a single desktop machine in 2002 may be able to emulate any desktop ever built. If you like the interface of a Macintosh but prefer the spreadsheet-handling of Lotus 1-2-3 on a PC, you'll be able to have your cake and eat it, too. Similarly, gigachips would have enough transistors to play multiple roles and turn a computer printer into, say, a copier-fax-modem-printer. So companies that have carved out niche markets could face new competitors, big and small.
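
The mechanism behind that run-anything promise is emulation: spend a slice of the surplus horsepower interpreting another machine's instructions one at a time. The toy interpreter below, built around an invented two-register, three-opcode "guest" machine, is only a sketch of the idea; a real emulator would decode the actual opcodes of a Mac or a PC.

```python
# Toy sketch of instruction-set emulation: a fast host machine interprets a
# guest machine's instructions one by one. The guest ISA here (LOAD/ADD/HALT,
# two registers) is invented purely for illustration.
GUEST_PROGRAM = [
    ("LOAD", "r0", 2),       # r0 = 2
    ("LOAD", "r1", 40),      # r1 = 40
    ("ADD",  "r0", "r1"),    # r0 = r0 + r1
    ("HALT",),
]

def emulate(program):
    registers = {"r0": 0, "r1": 0}
    pc = 0                                   # guest program counter
    while True:
        op, *args = program[pc]
        pc += 1
        if op == "LOAD":
            reg, value = args
            registers[reg] = value
        elif op == "ADD":
            dst, src = args
            registers[dst] += registers[src]
        elif op == "HALT":
            return registers

print(emulate(GUEST_PROGRAM))                # {'r0': 42, 'r1': 40}
```

Interpreting every instruction in software costs many host cycles per guest instruction, which is why emulation has historically felt slow; the point of the personal-Cray scenario is that so much spare power would make the overhead irrelevant.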

POSITIVE SHRINKING. Starting around 2005, second- and third-generation gigachip technology will stuff unprecedented combinations of computing hardware into miniature boxes. The great-grandkids of today's primitive personal digital assistants, for example, may fuse mainframe power, a cellular videophone, and a wireless fax-modem in a flip-open housing the size of a deck of cards. They'll be cheap enough that every purse and briefcase will have one for hopping on the Information Superhighway--to keep up with Wall Street, home, and the office. And order-taking phone systems that now mechanically intone, "Press 1 for information..." could get smart enough to understand spoken questions--and answer so articulately that it will be tough to tell if you're talking to hardware or fellow "wetware."

There will be lots more wonders. Smart TVs will keep track of your viewing preferences and suggest new programs, so you needn't surf through 500 channels. Libraries will house digital books that update themselves by using the global Internet to communicate with other libraries and databases that cover the latest developments in science, the arts, and business.

BETTER BRAINS. Ultimately, gigabit technology could even lead to silicon brains at least as intelligent as humans. "The potential of the computer is way, way beyond human potential," says Hugo de Garis, a researcher who last year moved to the Human Information Processing Research Lab in Kyoto, Japan. The human brain has about 10 quadrillion synapses, or connections between neurons, adds de Garis, "but this is trivial" next to the computer he plans to build. It would have billions of times more silicon "synapses," each far faster than the brain's.

All this and more will be possible-- once process engineers nail down methods for cranking out gigachips economically. Getting to the first gigabit generation in 2002 will require only evolutionary improvements in production equipment, but it will take some groundbreaking new technology to go much further. Around 2005, chipmakers will hit the limit of light's ability to print microscopic lines on silicon. To make smaller transistors, they'll have to perfect ways to do the job with X-rays or electron beams.

Neither approach will be cheap. That's why chip producers also will be going all-out to trim the costs of wafer-fabrication plants--the hermetically sealed factories where chips are etched in saucer-size silicon disks known as wafers. If a wafer fab's price hits $2 billion by 2000, as expected, chip prices may have to increase faster than performance--reversing the effects of Moore's Law. The semiconductor industry would do almost anything to avoid that (page 90).

In the meantime, researchers aren't waiting to dream up ways to use gigachip power. Computer scientists, for example, are pursuing completely new kinds of software. Already emerging are such glimmers as "evolutionary" programs and genetic algorithms that improve themselves with age and intelligent software agents that roam the electronic globe hunting down answers to questions both practical and profound--from "What's the cheapest fare I can get to Seattle?" to "What is a quark?"
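
As a concrete taste of what "evolutionary" software means, here is a stripped-down evolutionary algorithm--selection plus random mutation, no crossover--that breeds a population of candidate strings toward a target. The target string, population size, and mutation rate are arbitrary choices for illustration.

```python
# Stripped-down evolutionary algorithm: keep the fittest candidates, breed
# mutated copies, repeat. Target, population size, and mutation rate are
# arbitrary illustrative choices.
import random
import string

TARGET = "wonder chips"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    """Number of positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Randomly replace a few characters."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(population_size=100, generations=500):
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(population_size)]
    for generation in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            return generation, population[0]
        parents = population[:population_size // 10]   # keep the top 10%
        population = parents + [mutate(random.choice(parents))
                                for _ in range(population_size - len(parents))]
    return generations, population[0]

print(evolve())    # e.g. (63, 'wonder chips') -- the generation count varies
```

Genetic algorithms proper add crossover between parents, and software agents wrap this kind of search in goal-seeking behavior, but the improves-with-age flavor is already visible in a few dozen lines.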

Scientists believe these and other onetime fantasies will become reality over the next two decades, triggering upheavals in business and society. "People don't realize how fast these things are going to come," says Thomas N. Theis, manager of semiconductor physics research at IBM's Research Div.

If silicon becomes as pervasive and as smart as researchers envision, the workplace may undergo changes that could make today's downsizings seem trivial. "We're going to have a big impact on industry--essentially decentralizing it," predicts Mark C. Melliar-Smith, chief technical officer of AT&T Microelectronics, the telecom giant's chipmaking arm.

The reason: Networks of $1,500 computers with the powers of today's $15 million supercomputers will make it possible to do almost any white-collar task anywhere. The costly technologies now found only at headquarters, such as video-conferencing systems, will become standard features in all PCs. "We'll squeeze it onto one chip, and it'll become a consumer product," says Melliar-Smith. Then there will be no economic rationale to maintain large central offices. And that will change the nature of management.

Such transformations are going to sweep through other institutions as well. D. Raj Reddy, dean of Carnegie Mellon University's School of Computer Science, worries about the education system's ability to prepare itself, let alone students, for the new world. "There's a fair chance that the university as we know it will cease to exist in 50 years," muses Reddy.

Restructuring will hit especially hard in the electronics industry, says Dean E. Eastman, a vice-president at IBM's Research Div. Apart from the Japanese giants, only a few makers of computers and other electronics gear--IBM, DEC, Hewlett-Packard, and Siemens--actually produce their own chips today. And it's almost certain that even fewer will do so in the future, given the staggering costs of semiconductor plants.

That could pose serious challenges for builders of electronics systems: Gigachips won't be just components in a product--they'll be a product searching for a box. Chips already represent as much as 75% of the hardware value in some computers, and that will rise. "With more and more value heading for the silicon," says Michael J. Attardo, the IBM senior vice-president who runs Big Blue's microelectronics business, everyone else is going to get squeezed. "Margins are going to get tighter," says Jan C. Silverman, advanced technologies marketing manager at Hewlett-Packard Co.

NOT JUST INTEL INSIDE. At the same time, gigachip technology could undo Intel's near-monopoly. Because a gigachip will have enough transistors to accommodate the guts of multiple microprocessors, it will be possible to create a high-performance chip that uses just a bit of its silicon to emulate the old Intel chips and, therefore, remain compatible with existing software while offering far better performance.

Developing such a chameleon chip for all the major non-Intel designs is the goal of the Open Microprocessor Systems Initiative (OMI), a $270 million European Commission project involving 40 teams from 100-odd companies and university labs. OMI's initial focus is a reduced instruction-set computing (RISC) chip able to run software designed for a half-dozen microprocessors, including the Motorola/IBM PowerPC, Sun Microsystems' Sparc design, Advanced RISC Machines' ARM chip, and the Transputer chip from SGS-Thomson's Inmos Div. But it wouldn't take much to empower chameleon chips to act as Intel microprocessors as well and to run standard PC software.

Intel, of course, isn't going to take this lying down. Recently, for example, it joined forces with HP to develop a hybrid chip that will lift the technologies of both Intel's X86 family and HP's RISC design to new levels of performance. And Intel's next-century chips could offer their own emulations of rival designs.

Ultimately, the market for top-end microprocessors might plateau--not because of the overwhelming powers of gigachips but because of something called "ubiquitous computing." It is a concept envisioned by Mark Weiser, manager of the computer science lab at Xerox Corp.'s Palo Alto Research Center (PARC). With information networks everywhere and devices such as dirt-cheap telecomputer tablets, conventional forms of PCs will be largely passé, Weiser believes. With every networked computer in the world on call, you could just scribble what you want on a digital pad, and the request would zip into the network and search out a suitable computer. Most of the overhead would be in the network, so the digipads scattered around the office and home could use past-generation silicon, making them as cheap as pocket calculators.

Indeed, in the gigachip era, today's megachips, including Pentium, will be commodities. That will make ubiquitous computing practical. For example, PARC is experimenting with "active badges"--plastic cards with built-in transmitters. At PARC, these constantly relay signals to sensors in the ceiling, so that Weiser's assistant can pinpoint his location on a floor plan in an emergency. In an on-line world, this concept could reach everywhere--including the home. AT&T Bell Laboratories already has a prototype of HumaNet, where a futuristic living room is studded with ceiling sensors and equipped with remote-controlled gadgets, including a TV wall. And everything responds to verbal commands.

That points up another major trend in gigachip computing: Making machines easy to use by endowing them with humanlike senses--fluid speech, a good ear, keen vision. IBM calls the concept "natural computing" and has set up a separate unit to tackle near-term applications, including speech recognition in the first models of its upcoming Power Personal Systems line of PCs.

ENORMOUS JOB. The idea is to end technological tyranny for good: Machines will no longer dictate how people must use them but will instead adapt to each user. If you don't like using a mouse, you could simply point a finger at an icon on the screen to start your spreadsheet. Building computers that people can deal with almost as naturally as with another person is essential if the Information Superhighway is to be as publicly usable as the telephone system. It'll be an enormous programming job, says Larry Rabiner, director of information principles research at Bell Labs. But when it's done, "the whole world is going to change."

The first barrier to fall may be speech-recognition. "We've been working on speech for 20 years," says Michael L. Dertouzos, director of Massachusetts Institute of Technology's computer science lab, "and it's finally bearing fruit." In 10 years, even handheld computers will understand ordinary speech--and 15 years hence, predicts AT&T's Melliar-Smith, a pocket computerphone will instantaneously translate conversations between people speaking different languages. Today, he adds, it takes a file-cabinet-size translation system to manage a few hundred words.

But those tricks of the software engineer's art won't be possible unless the semiconductor industry clears its own near-term hurdle: the end of optical lithography. Since Silicon Valley's earliest days, chips have been printed the same way as photos: by shining light through a negative, or mask. At first it was visible light, but lately it's been with more precise ultraviolet (UV) light. Last year, SVG Lithography Systems Inc., a division of Silicon Valley Group Inc. in San Jose, Calif., unveiled equipment that uses UV light to print 0.35-micron lines, which soon will be the industry standard for producing 64-megabit DRAMs.

The key to stretching optical lithography into the next century is "deep-UV" light. Emitted by an excimer, or pulsing, laser, deep-UV light could print lines as skinny as 0.15 micron. That's sufficient, says Hisashi Hara, general manager of an advanced-technology lab at Toshiba Corp., to produce 1-gigabit chips. It's doubtful that optical methods can go beyond this, although Shoichiro Yoshida, executive vice-president of Nikon Corp., the leading supplier of lithographic systems, hints that novel light-emitting materials might do the trick. "There are some possibilities," he says, "but I can't discuss them."

Still, nobody believes that optical technology will work beyond the 4-gigabit level, which is expected to be reached in 2005. Something else will be required to produce 0.1-micron features. That's so teensy that if such transistors were silicon pearls, you would need a string of 2,500 of them to circle a human hair.
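
That string-of-pearls figure is easy to sanity-check, assuming a human hair about 80 microns across (the diameter is an assumption; real hairs vary widely).

```python
# Rough check of the pearls-around-a-hair comparison. The 80-micron hair
# diameter is an assumption; real hairs range from roughly 50 to 100 microns.
import math

hair_diameter_um = 80.0
circumference_um = math.pi * hair_diameter_um    # ~251 microns
pearls = circumference_um / 0.1                  # 0.1-micron "pearls"
print(round(pearls))                             # ~2,500
```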

The most promising candidate is electron beams. Theoretically, an E-beam system can produce lines as wispy as 0.02 microns wide, which would take chipmakers beyond 2025--to chips with trillions of transistors. The hangup is that E-beams write circuit lines with what amounts to a high-tech quill pen. Since a 256-megabit DRAM's circuitry will be equivalent to a street map of the entire world, "you can do only one wafer per hour" with an E-beam, says Masao Fukuma, senior manager of NEC Corp.'s Microelectronics Research Lab.

CATCHING HEAT. That's far short of today's output of 60 wafers an hour. Cornell University's National Nanofabrication Facility is working with IBM on a system that uses a grid of E-beam emitters to write the same pattern on multiple chips simultaneously. NNF Director Harold Craighead believes this process could etch 50 wafers an hour. "We're getting encouraging results," he says, "but we're many years from a prototype production tool."

Around 2011 comes the real make-or-break point, when transistors are expected to dive below 0.1 microns. These little critters would switch on and off so fast they'd generate enough heat to melt holes in the silicon unless the chips are cooled with a refrigeration system. So either computers will grow big again, which would boost costs and slash market potential, or chipmakers will have to switch to a totally new breed of microswitch--ultratiny quantum transistors.

These would exploit the weird world of quantum dynamics, where electrons become waves that defy the laws of standard physics, enabling one quantum transistor to do the work of several. "That's what's neat about quantum transistors," says Mark A. Reed, a professor of electrical engineering at Yale University. By swapping a quantum device for several ordinary transistors, circuit density--and heat--could be reduced enough to avoid the need for refrigeration.

TERA FIRMWARE. Fujitsu, Hitachi, and Texas Instruments are among the chipmakers making strides in quantum electronics. Late last year, TI unveiled an experimental circuit with 17 quantum transistors that do the work of 40 conventional devices. If it all works out, 16-trillion-bit (terabit) memory chips are conceivable by the mid-to-late 2020s. One such monster would hold 6,667 sets of the Encyclopaedia Britannica.

On the other hand, researchers admit, it's also conceivable that quantum chips won't ever be economical for mass-market products. In that event, the saga of the incredible shrinking transistor might finally end in the 2010s.

Even so, gigachips will by then have unleashed such an explosion of cheap computing power that the world will never be the same. In addition to artificial speech, artificial vision is coming. Today's approach is limited because it requires a computer to identify an image by comparing it against a library of stored images. But new research promises machine-vision systems as versatile as the eye-brain biosystem.

For Steven Shafer, director of Carnegie Mellon University's Calibrated Imaging Lab, the key is a deeper understanding of the physics of perception: How the brain analyzes texture, color, and reflectivity to extract meaning from all the visual data that assault our eyes. That's why his lab is crammed floor-to-ceiling with objects of different shapes, textures, and colors.

Shafer conceived the first physics-based theory for machine vision, called the Dichromatic Reflection Model, in 1984, and it's used in the self-steering vehicle that CMU built for the Pentagon. But Shafer figures he's still a decade from his ultimate goal: "A computer that can identify objects when it doesn't have any notion of what it's looking at."

Meanwhile, machines will begin getting perhaps the most valuable sense of all: common sense. The trouble with computers is that they take everything so literally. Send a note to the company librarian asking for data on the "plasitics" industry, and you'll no doubt get what you want. Put the same request to a computer database, and it probably will find nothing. To make computers less "brittle," a team led by Douglas B. Lenat, an artificial-intelligence pioneer at Microelectronics & Computer Technology Corp., the Austin (Tex.)-based electronics industry consortium, has been stuffing a database with everyday knowledge--not the facts that fill encyclopedias, but the grounding you need to comprehend what an encyclopedia says. Lenat expects to launch the first commonsense software product next year.

When all these artificial senses get combined in gigachip systems, computers a decade from now may become as adept as people at interpreting ambiguous instructions and dealing with uncertainty. Augmenting these capabilities with software agents, evolutionary software, and other AI technology could make it possible to automate many white-collar jobs. Agents can learn a person's habits and routines, then carry out the tasks on their own. The first generation isn't all that intelligent. But with gigachip power, a future agent that you repeatedly dispatch to fetch on-line news about China might decide to explore the network and look for other sources of data that have escaped your notice.
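
Here is a minimal sketch of how such an agent might score incoming items against habits it has learned. The headlines, keywords, and weighting scheme are all invented for illustration; a real agent would pull items from live on-line sources.

```python
# Minimal sketch of a news-fetching agent that learns from what its owner
# actually reads. Headlines and the scoring scheme are invented for
# illustration.
from collections import Counter

class NewsAgent:
    def __init__(self):
        self.interests = Counter()          # learned keyword weights

    def observe_reading(self, headline):
        """Strengthen interest in the words of a headline the owner read."""
        self.interests.update(headline.lower().split())

    def rank(self, headlines):
        """Order new headlines by how well they match learned interests."""
        def score(headline):
            return sum(self.interests[word] for word in headline.lower().split())
        return sorted(headlines, key=score, reverse=True)

agent = NewsAgent()
agent.observe_reading("China trade talks resume")
agent.observe_reading("China opens new chip plant")

fresh = ["Local weather update", "China currency news", "Gardening tips"]
print(agent.rank(fresh))   # the China item rises to the top
```

The gigachip-era agent described above would go further--deciding on its own to hunt for sources its owner never asked about--but the learn-then-act loop is the same.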

Somewhere down the line, perhaps as early as the first decade of the 21st century, the number of transistors on a cluster of chips will reach a critical mass that would make so-called emergent behavior inevitable. That's the inherent tendency of any dynamic system, given sufficient complexity, to begin exhibiting self-organization and rudimentary intelligence. It might happen spontaneously, but probably it will be by design. A dozen research groups in the U.S. and Europe are intent on fashioning silicon brains more powerful than the human version. One team, the Brain Builder Group, was founded early last year by de Garis at the Human Information Processing Research Lab, part of Japan's Advanced Telecommunications Research Institute. In collaboration with Nippon Telegraph & Telephone Corp.'s nanoelectronics lab, he hopes to create a superior silicon brain "within 10 years."

For now, superhuman silicon brains remain science fiction. But the rest of these gee-whiz developments are pretty sure bets. Chips with the required power will almost certainly be produced over the next decade or so. So when you log onto a network to join a virtual work team in 2010, you may not be able to tell which co-workers are flesh-and-blood and which are silicon. Computers can't get any friendlier than that.

Otis Port in New York, with Neil Gross in Tokyo, Robert Hof in San Francisco, and Gary McWilliams in Boston

