
Cover Story: Information Technology Annual Report

INTRODUCTION: INTO THE WILD FRONTIER

Back in 1979, if you had visited Xerox's Palo Alto Research Center, you would have literally seen the future of computing. The precursor of Windows. The computer mouse. The laser printer. And a local-area network, newly dubbed Ethernet. If you had then broadened the tour, scientists at AT&T Bell Labs and IBM could have given you a spiffy preview of digital phone switches and computer memory--anticipating developments that arrived just this year.

Ah, but you didn't make those visits, and now the opportunity of a lifetime has passed. Or has it? The world has changed a lot since then. Computer skills are much more diffuse. Exotic new technologies abound. But one thing remains true: In a handful of the world's outstanding computer labs, the shape of tomorrow is as plain to see as it was 20 years ago. What those labs are doing now will set the agenda for computing over the next 15 to 20 years.

During the past two months, the BUSINESS WEEK technology staff canvassed computer-science leaders to identify the top labs. And a team of a dozen reporters went to the labs to see what the future holds. These are their dispatches from the digital frontier.

The heads of America's hottest computer companies fight over browsers, operating systems, and "network PCs." But if you wander through the world's top-ranked computer-science labs--the ones at Carnegie Mellon, Stanford, MIT, and the like--creative minds don't dwell much on who rules the "desktop," with its tired metaphor of windows, folders, fonts, and trash cans.

What do the world's leading computer scientists focus on? Strange pursuits. John H. Holland at the University of Michigan ponders how ants manage their colonies when there is no central, organizing authority. David Gelernter at Yale University tinkers with "lifestreams"--rushing rivers of data that can store an individual's total creative outpourings. At the University of Washington's Human Interface Technology Laboratory (HITLab), Thomas A. Furness devises tools to let people illustrate points in a conversation by materializing 3-D images in space.

These startling activities are by no means random. In 15 years, or maybe 20, such projects will bear concrete results--some of them as predictable now as personal computers were to visionaries who visited Xerox Corp.'s Palo Alto Research Center in the late 1970s. Peering into the future, researchers don't obsess over compressing bits or cramming things onto disks. Their projects presume a world of nearly infinite digital storage space, processing power, and transmission capacity.

Many of these scientists believe computers of the future will be patterned, at least in part, on living creatures. These machines will draw much closer to humans than computers based on the old desktop metaphor. They'll respond to our voices and extend our senses. They'll simulate complex phenomena--weather patterns, stock market crashes, environmental hazards--solving problems and predicting outcomes at a price anyone can afford. Computers, if they even retain that moniker, will tend our children, meld with our flesh and blood, heal the sick, and restore eyesight to the blind.

Computers--or networks of them--will become ubiquitous, most digital pioneers agree, as they are invisibly embedded in other things. These machines will reconfigure themselves when new applications are required. And they may even communicate among themselves in languages for which no human ever wrote code.

Indeed, among some scientists, a whole new metaphor for computing is taking shape, patterned on the natural resilience and elegance of biological organisms. Electrical engineers and software wizards extol the astonishing ease with which living systems process staggering volumes of sensory data.

On this model, researchers say, machines of the future will learn to diagnose, repair, and even replicate themselves. There isn't much choice. Computing devices and the networks that link them will be far too complex for lumbering humans to monitor or manage with precision. "The biological metaphor is rich, rich, rich," says Michigan's Holland, pioneer of evolution-based mathematical formulas known as genetic algorithms.

"EXPLOSIVE GROWTH." Why do we know all this is coming? Two reasons. First, because we can bank on the same forces that unleashed the PC revolution 25 years ago: shrinking silicon circuits and faster communications infrastructure. "The pace of change is actually accelerating now," says Richard Howard, director of wireless research at Lucent Bell Laboratories. In the next two decades, "we'll see explosive growth of communications, computing, memory, wireless, and broadband technology."

The other reason we can divine future trends is that the seeds are already widely sown--both in the market and in dozens of computer research labs. Invisible computers? Chipmakers ship about 3.5 billion of those every year in the form of embedded or "real-time" processors. That's nearly 50 times the number of microprocessors sold in boxes with keyboards and monitors. An economy-class car has a dozen hidden microprocessors controlling the engine, brakes, and other systems. A Mercedes has about 60. Reconfiguring machines? That's what field-programmable gate arrays are. They're on the market now--though engineers have barely begun to tap their power.

Even computers that become part of our bodies are not so far-fetched. According to Peter Cochrane, head of research at British Telecommunications PLC, surgeons have performed about 17,000 cochlear implants on patients with hearing loss. "These people are already walking around with chips in their heads," he says.

All of this is on the street right now. But it pales beside what's in the labs. Driven to create a display technology more "organic" than a computer monitor, Furness of the University of Washington's HITLab has developed a so-called retinal display. It "paints" pictures directly on the eye by modulating a stream of photons from light-emitting diodes and scanning them across the retina. The mind perceives these scans as vibrant color pictures.

HITLab's goal was a better mode of visualizing data. Then, serendipity entered the picture. "We've found that people who have inoperable cataracts can see with this machine," says Furness, "because we're bypassing the optics of the eye and going directly to the retina." These findings are preliminary, and the equipment takes up a lab bench. But Furness expects to see commercial systems in about three years. Portable or implantable versions in high-definition "are definitely within a 15-year time frame," Furness says.

MINOR MIRACLE. Breakthroughs of this sort are all rooted in what Howard of Bell Labs calls third-stage innovations. The term refers to complex technology that's deftly disguised in simple applications. Today, when the grocery store scans your credit card, a minor miracle occurs: The clerk gets confirmation in a few seconds. How is that possible? "Because it doesn't go through the phone system," says Howard. "You've thrown a few dozen bytes of packet data into the air. It gets picked up by the router at the bank, and goes instantly into the bank computer. It's invisible and pervasive and very, very complicated--but to you, it looks simple."

Auto-navigation guides require equally advanced and invisible technology. For $150, you can install a global-positioning satellite system in your car that measures precise time signals received from at least three satellites, so a processor can pinpoint your location on a digital map stored on a CD-ROM. Soon, the electronics for such systems will be on a single, embedded chip that will cost just a few bucks. Then, your cellular phone or personal digital assistant "will know where it is at all times and be able to query the Net for local weather reports and restaurant guides," says Internet pioneer Vinton G. Cerf, MCI Communications Corp.'s senior vice-president for Internet architecture and engineering.
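
For the technically curious, the arithmetic inside such a receiver is simple enough to sketch. The short Python routine below is only an illustration--the satellite positions and signal travel times are hypothetical, and it assumes the receiver's clock error has already been corrected--but it shows how three time signals become a position: keep nudging a guess until the predicted distances to the satellites match the measured ones.

    import numpy as np

    C = 299_792_458.0  # speed of light, meters per second

    # Hypothetical satellite positions (meters, Earth-centered coordinates)
    sats = np.array([
        [15_600e3,  7_540e3, 20_140e3],
        [18_760e3,  2_750e3, 18_610e3],
        [17_610e3, 14_630e3, 13_480e3],
    ])
    # Hypothetical signal travel times (seconds), converted to distances
    ranges = C * np.array([0.0707, 0.0722, 0.0769])

    def locate(sats, ranges, guess=np.zeros(3), iters=10):
        """Gauss-Newton iteration: adjust the position guess until the
        predicted distances to the satellites match the measured ranges."""
        x = guess.astype(float)
        for _ in range(iters):
            diffs = x - sats                       # vectors from each satellite to the guess
            dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
            J = diffs / dists[:, None]             # how each range changes as the guess moves
            dx, *_ = np.linalg.lstsq(J, ranges - dists, rcond=None)
            x += dx
        return x

    print(locate(sats, ranges))  # estimated receiver position, in meters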

This process--the commoditization of ultrahigh technology--opens spellbinding opportunities for new consumer-electronics products. British Telecom thinks future generations of portable phones could be installed right in your ear. While talking, the user could also glimpse images or data that are pulled invisibly off the Internet and projected onto a magnifying mirror positioned beside one eye. IBM has demonstrated a "personal area network" (PAN) that lets two businessmen exchange a calling card's worth of personal information simply by shaking hands. Both must carry card-size transmitters and receivers. Their handshake then completes an electric circuit, and each person's data is transferred to the other's laptop computer.

As high technology, inexorably, becomes a game of miniaturization, mathematics, and silicon expertise, companies we do not now recognize as computer players can gain leverage. Philips of the Netherlands, for example, is developing its own, PAN-like products aimed at consumers. One example: a "hot badge" in which users input personal likes, dislikes, or their philosophies of life. The badges exchange signals with one another on the street, at a party, or in a bar. When two people with similar tastes meet, both badges light up. "Not everything that's possible is desirable," concedes Stefano Marzano, senior director of Philips Design in Eindhoven, Netherlands. But in a world where nearly any type of gadget is possible, "our technical advantage is our ability to imagine."

These concepts appeal because they are playful. But the more urgent issue, most scientists agree, is making such devices more natural to use. As a "user interface," 2-D computer screens get a hearty thumbs-down from many digital visionaries. And window-based menus don't help much. Trying to be all things to all people, pull-down menus are stuffed with so much information and so many options that your vision is constantly fixed on narrow details. Xerox PARC Director John Seely Brown compares the feeling to "walking around with toilet-paper tubes taped to your eyes." Without a peripheral view of your data, you get blindsided when new information turns up suddenly.

PARC has devised a different approach, which it calls "calm computing." Its engineers are developing new interfaces that engage a user's peripheral vision and make good use of psychology. For example, there's a "hyperbolic browser" that causes database trees to come alive on your screen. No more mazes of icons. Using it is like seeing into your computer's mind through a fish-eye lens. Navigate to the right with your mouse, and information on the periphery moves to front and center.
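
The effect is easier to grasp in miniature. The toy Python sketch below is not PARC's code; it simply illustrates the focus-plus-context idea, squeezing a hypothetical tree onto a unit disk so that whatever sits near the focus gets most of the space while distant branches shrink toward the rim.

    import math

    def fisheye(nodes, focus, scale=0.5):
        """Map each node's distance from the focus through tanh so that distant
        branches shrink toward the rim of a unit disk while nearby ones stay big."""
        placed = {}
        for name, (x, y) in nodes.items():
            dx, dy = x - focus[0], y - focus[1]
            d = math.hypot(dx, dy)
            if d == 0:
                placed[name] = (0.0, 0.0)      # the focus sits at the center
                continue
            r = math.tanh(scale * d)           # compresses [0, infinity) into [0, 1)
            placed[name] = (r * dx / d, r * dy / d)
        return placed

    # Hypothetical tree coordinates; refocusing is just a matter of calling again
    tree = {"root": (0, 0), "docs": (1, 0), "mail": (0, 2), "archive": (5, 5)}
    print(fisheye(tree, focus=(0, 0)))
    print(fisheye(tree, focus=(5, 5)))  # navigate toward "archive": it moves front and center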

Even this approach, however, doesn't take full advantage of a human's cognitive machinery. John Gage, chief science officer at Sun Microsystems Inc., believes computer interfaces will eventually draw on multiple sensual inputs to convey much more information than a 2-D screen. To execute fast trades in a busy market, for example, Wall Street hotshots may one day sit at a computer that emulates an automobile driver's seat.

There, the continual flux of data on corporate results, stock prices, interest rates, and other macroeconomic information could be represented in multisensual data streams. In the real world, as you drive, you feel the car bounce, your body shifts, you accelerate into a turn, there's a blast of wind, the sound of the engine--a hundred things you monitor, almost passively, says Gage. Each of these sensations can stand for something. In contrast, he says, with today's computers, "all you have is your eyes."

At MIT's Media Lab, Hiroshi Ishii is already developing a multisensual computing environment, which he calls an ambient room. It uses shadows flickering on a wall or background noise such as raindrops falling in a pool to alert the occupant about some relevant activity without intruding. So a stockbroker might track the volume of shares being traded on NASDAQ through the sound of rainfall, which would grow louder and faster as trading volume increases. The trading in a particular stock might be linked to a shadow pattern on the wall, so the broker could tell at a glance how active it is.
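
The mapping itself can be almost trivially simple. Here is a toy sketch, with made-up thresholds rather than anything from the Media Lab, of how trading volume might set the tempo of the rain:

    def raindrop_rate(shares_per_min, quiet=50_000, busy=2_000_000,
                      min_drops=0.2, max_drops=20.0):
        """Map trading volume onto a drops-per-second rate, clamped so the
        room never falls silent and never turns into a downpour."""
        t = (shares_per_min - quiet) / (busy - quiet)
        t = max(0.0, min(1.0, t))
        return min_drops + t * (max_drops - min_drops)

    for volume in (30_000, 400_000, 3_000_000):
        print(volume, "shares/min ->", round(raindrop_rate(volume), 1), "drops/sec")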

DIGITAL PETS. To make these interfaces come alive, engineers will depend on a new generation of intelligent "agents." These are bits of software that can carry out your bidding by scanning your E-mail for important messages, say, or roaming the Net and alerting you when there is news on a topic you've asked them to monitor. Crude versions of these programs have been on the market for several years. The next generation, far more powerful, will help customize how you interact with your computer.

Researchers at Fujitsu Ltd., Japan's largest computer company, have spent more than a decade trying to make agents more lifelike. Collaborating with Carnegie Mellon University, they have developed 3-D animated characters that "live" and evolve in a computer's memory. In March, Fujitsu released its first commercial product: a CD-ROM-based character called Fin Fin that resembles a dolphin and exhibits qualities of a household pet. Over time, Fin Fin learns the sound of its owner's voice, transmitted through a microphone. It comes when you call and accepts digital "food."

The technology behind Fujitsu's digital pet is called artificial life, or a-life--a branch of computer science that dates to the early 1950s. It began with simple programs called cellular automata, which created interesting patterns on computer screens by merging, mutating, and evolving. Later, scientists such as the University of Michigan's Holland realized that they could channel this evolutionary process to create models of events in the real world and to solve mathematical problems.
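
Those early programs are easy to recreate in miniature. The sketch below is an illustration, not a reconstruction of the historical code: a one-dimensional cellular automaton in which each cell's fate depends only on itself and its two neighbors, yet the colony of cells weaves an intricate lace-like pattern.

    def step(cells, rule=90):
        """Each cell's next state depends on itself and its two neighbors, looked up
        in the 8-bit rule table (rule 90 draws a Sierpinski-like lace pattern)."""
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    row = [0] * 31
    row[15] = 1            # start from a single live cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = step(row)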

Holland's semi-random evolutionary programs, called genetic algorithms, are used in industry to improve scheduling and logistics in manufacturing facilities, among other things. And new biologically inspired approaches to computing now thrive in laboratories around the world. At the Swiss Federal Institute of Technology in Lausanne, Daniel Mange, head of the Logic Systems Laboratory, is developing an electronic watch that repairs itself. The rudimentary prototype, called a biowatch, does this by shifting tasks among six redundant silicon cells, each of which contains the full programming of the watch--just as each animal cell contains the full genome. Over the next 10 years, Mange hopes to perfect integrated circuits that both self-repair and self-replicate. "We are a kind of god of nature," he laughs.
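
A genetic algorithm of the sort Holland pioneered can be sketched in a few dozen lines. The toy example below uses made-up job lengths rather than any factory's real data: it evolves an assignment of jobs to machines, breeding and mutating candidate schedules so the busiest machine finishes as early as possible.

    import random

    JOBS = [7, 3, 9, 2, 5, 8, 4, 6, 1, 5]     # processing times (made up)
    MACHINES = 3

    def makespan(assign):
        """Finish time of the busiest machine under a given job assignment."""
        loads = [0] * MACHINES
        for job, machine in zip(JOBS, assign):
            loads[machine] += job
        return max(loads)

    def evolve(pop_size=60, generations=200, mutation=0.1):
        pop = [[random.randrange(MACHINES) for _ in JOBS] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=makespan)             # selection: the fitter half survives
            survivors = pop[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, len(JOBS))        # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < mutation:              # occasional mutation
                    child[random.randrange(len(JOBS))] = random.randrange(MACHINES)
                children.append(child)
            pop = survivors + children
        best = min(pop, key=makespan)
        return best, makespan(best)

    print(evolve())   # total work is 50, so no schedule can beat a makespan of 17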

Fujitsu's notion, exemplified in Fin Fin, combines aspects of a-life and intelligent agents to create gentle, fanciful, or cuddly user interfaces for computers. The agent element would execute the user's instructions; a-life would allow it to evolve and adapt to the user's tastes.

Such a combination could be an ideal way to maintain a friendly face on powerful machines that are growing increasingly complex. And agents such as Fin Fin will be even more compelling when such creatures move from 2-D screens into virtual reality.

This evolution may not sound appealing, given the uncomfortable headgear and grainy displays that mar many of today's VR systems. But as more processing power and memory get squeezed into smaller devices, the technology will become more inviting. In education, VR tools will let children learn things through experience that were once accessible only through books or films. To teach children the relationship between planets in the solar system, for example, the University of Washington's HITLab envisions a simulation in which the user grows to a height of a thousand miles and strides through the solar system at the speed of light.

VIRTUAL TOUCH. If you think of this as a game, researchers won't take offense. It's the game industry, after all, that created the economies of scale needed to make high-end graphical computing available to everyone. Millions of children clutching Sony Corp. PlayStations and Nintendo 64 consoles now know more about simulated worlds than most of their parents. The next hot VR technology to be commoditized by games is so-called force feedback, or virtual touch. Manufacturers are releasing steering wheels and joysticks for PC games that resist when you take a sharp turn or fire a missile. "Adult" applications are the next logical outgrowth--as soon as sensors and processing power are cheap enough. Sex, after all, was an important driver of earlier technologies, such as VCRs, CD-ROMs, and electronic commerce.

In 15 years, VR simulations will be woven deeply into education and job training, according to Anita K. Jones, professor of computer science at the University of Virginia. Jones should know. She spent most of the past four years as director of Defense Research & Engineering at the Defense Dept., where she oversaw all science and technology development projects. The military invented high-fidelity simulations for flight training. And it still bankrolls the most cutting-edge applications. To train soldiers in decision-making, for example, the Army has been known to square entire tank battalions off against simulated forces on virtual battlefields. The behavior of simulated enemy tanks is so realistic that soldiers can't distinguish them from human-driven foes.

Exercises of this sort require staggering processing power--beyond the reach of academic labs. But with price reductions pushed by games, high-end simulations will soon be widely available. "The boundary between training and doing will disappear," Jones predicts.

While high-fidelity simulation changes how we view the world, tiny devices called micro-electro-mechanical systems, or MEMS, will transform how the world responds to us. The best example of MEMS, which combine actuators with sensors and computation, is the device in a car that detects abrupt deceleration, recognizes it as a crash, and inflates an air bag. Scientists at Xerox PARC believe MEMS could be the key to self-repairing machines. "If you place a heavy weight on a steel beam, it will eventually fail by buckling or bending," says Mark Weiser, PARC's chief technologist. "But if you could cover the beam with microsensors and actuators that notice the bending begin, then push back very slightly before it gets very far, the beam can support 10 times more weight."
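
At its heart, the air-bag decision is a few lines of code watching a sensor. The sketch below uses hypothetical thresholds, not any manufacturer's real values; the point is simply that the chip fires only when deceleration stays severe long enough to rule out potholes and panic stops.

    def crash_detector(samples_g, threshold_g=40.0, required_ms=5, sample_ms=1):
        """Return the time (ms) at which a crash is declared, or None. Firing
        requires deceleration to stay above threshold_g for required_ms."""
        sustained = 0
        for i, g in enumerate(samples_g):
            sustained = sustained + sample_ms if g >= threshold_g else 0
            if sustained >= required_ms:
                return i * sample_ms           # the inflator would fire here
        return None

    hard_braking = [1.2] * 50                          # about 1 g: never fires
    crash = [1.0] * 10 + [55.0] * 10 + [5.0] * 5       # severe, sustained spike
    print(crash_detector(hard_braking), crash_detector(crash))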

MANY TECHIES. Trends like these in electronic components are easy enough to predict. Abrupt innovations are harder to anticipate--and are playing an ever greater role in technological evolution. Why? Frontline computer expertise is now widely diffused. Asked to rank the top labs in an informal BUSINESS WEEK poll, a broad group of computer scientists predictably saluted Carnegie Mellon, Stanford, and MIT. But lesser-known programs at the University of North Carolina and Michigan also got high marks, as did the University of Edinburgh and British Telecom's computer-science lab. "In the last 20 years, the number of computer-science PhDs has risen from about 100 to 1,000 per year," says Randy H. Katz, chairman of the University of California at Berkeley's Computer Science Div. "They couldn't all get jobs at the six leading labs, so they went to work in a wide range of universities, industrial labs, and small companies."

This diffusion of computer expertise will speed the quest for new metaphors in computing. So will fast communications and the freewheeling scientific anarchy of the World Wide Web. Online forums on the Internet, for example, get credit for the rapid standardization of the virtual-reality modeling language, or VRML, used to create 3-D worlds on the Net. Scientists now have vastly expanded avenues for collaboration. And while they disagree about how to achieve new models for computing, few would question the need for change. Throughout the industry, there is palpable impatience with systems and networks that crawl--and crash. "As a heavy user of computers, I hate them," says Yale's Gelernter. "As a computer-science person, I'm outraged by the quality of the software."

Even Microsoft Corp.'s top science maven, Nathan P. Myhrvold, seems weary of the PC's limitations. "The rudest thing about computers is that they only do exactly what we tell them to do," he says. "If people were as rude--that precise and specific--we wouldn't marry or keep jobs."

Microsoft is doing everything in its power to change that. The company is frantically beefing up its Microsoft Research staff, which will triple in size, to 600, by the year 2000, says Myhrvold. Even competitors tip their hats to William H. Gates III's recruiting strategy, which netted Gordon Bell--father of the minicomputer at Digital Equipment Corp.--and a dozen other household names in the industry.

Microsoft won't voluntarily pull the plug on its Windows franchise. At the same time, it can't buck the trend toward intuitive computing that has taken such firm root in the world's top labs. If a breakthrough--or sheer increases in processing power--lead to superior interfaces, Microsoft will be forced to follow the market.

Will computers ever behave more like living creatures? Can they aspire to even primitive consciousness? Michigan's Holland says it can't be ruled out. With the most complicated computers we have today, he says, "the typical element [on a chip] will contact up to 10 other elements." In the human central nervous system, the equivalent element contacts about 10,000. The difference is three orders of magnitude--a number that has significance in academic circles. "There's a kind of aphorism: Any time you jump three orders of magnitude, you have a whole new science," says Holland. As we move toward billions of elements on a chip, the boundaries that separate us from machines will continue to blur. "It's beyond our realm to say that machines of such enormous complexity can't be conscious," says Holland.

As these machines move more seamlessly into our lives, our bodies, and our thoughts, the very notion of a "computer"--with its manuals, glitches, and crashes--may recede to a memory.

By Neil Gross, with Julia Flynn in Edinburgh, Otis Port in Redmond, Wash., and bureau reports

