

Special Report

WHERE NO COMPUTER HAS GONE BEFORE

Engineers at Boeing Co. have access to the fastest supercomputers in the world. Their $9 million Cray Research Inc. Y-MP, for example, can blast through 1.3 billion calculations per second. But is that enough power to satisfy them? Nooooo.

Sure, they can use the Cray to simulate air flowing over a future jumbo jet's wing. But why stop there? Why not model the airflow around an entire aircraft and calculate how the wings, tail, and fuselage might fare at supersonic speeds? Given enough processing power, Boeing engineers might eventually be able to tell a computer how big they want the plane to be, how far it must go, and what the fuel economy should be. In the end, the computer could design the whole thing itself. "We want to go where no man has gone before. It's the old Star Trek thing," says Kenneth W. Neves, manager of high-speed computer programs at Boeing. "But we need the starship."

TOPS IN FLOPS. Not to worry, it's on its way. It's called a teraflops computer, and it will crunch through a mind-boggling 1 trillion arithmetic operations per second -- more than 50 times more power than today's fastest machines can offer, for about the same price. It will do in one second what a person could do punching one calculation a second into a hand-held calculator, 24 hours a day, 365 days a year, for 31,709 years. Such a computer will allow researchers to tackle jobs that seemed impossible a few years ago. It could assess global climatic change 100 years into the future, for instance, or use quantum mechanics to squeeze more mileage out of cars.
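For readers who want to check the calculator comparison above, here is a quick back-of-the-envelope sketch in Python, purely illustrative, using only the figures quoted in this story:

```python
# One teraflops machine does 10^12 operations in a single second. At one
# hand-punched calculation per second, around the clock, how long would
# the same work take a person?
ops_per_second = 1_000_000_000_000      # 1 trillion operations per second
seconds_per_year = 60 * 60 * 24 * 365   # 31,536,000 seconds in a non-leap year

years_by_hand = ops_per_second / seconds_per_year
print(f"{years_by_hand:,.0f} years")    # about 31,710 years; the article's 31,709
                                        # comes from truncating rather than rounding
```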

The teraflops computer, though, will be historic not only for what it will do, but for how it will do it. Instead of ramming chunks of data one at a time through a single, central circuit, the way virtually all computers now operate, it will harness the power of hundreds, thousands, or even tens of thousands of powerful microprocessors -- something like jamming all the PCs in a large office building into one box, wiring them together with a miniature network, and programming them to cooperate on a single problem. Just as 12 hungry people can polish off a box of doughnuts faster than one voracious eater, gangs of microprocessors can complete many jobs in a fraction of the time of conventional large computers.
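To make the doughnut analogy concrete, here is a minimal, purely illustrative sketch in Python: one job is chopped into independent pieces, each piece is handed to its own worker, and the partial answers are combined at the end. The worker count and the job itself are invented for illustration; none of the machines in this story are actually programmed this way.

```python
# Toy illustration of massively parallel processing: split one job into
# independent pieces, let each "processor" chew on its own piece, then
# combine the partial results with one cheap final step.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker handles only its own slice of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    workers = 8                                   # stand-in for hundreds or thousands of processors
    size = len(data) // workers
    chunks = [data[i * size:(i + 1) * size] for i in range(workers)]

    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)  # the pieces run side by side

    print(sum(partials))                          # combine the partial answers
```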

This revolutionary design, called massively parallel processing, or MPP, may eventually become the standard way to build large computers. For now, MPP is finding early successes in scientific work, where Cray and other supercomputers have ruled for almost 20 years. But with more sophisticated software in place, the machines may carve out a role in commercial data processing, too. This year, for instance, Teradata Corp. will sell about $300 million worth of specialized MPP gear for finding patterns in volumes of business data too cumbersome for the largest mainframes from IBM and Unisys Corp.

MPP technology actually marks the final phase of the microprocessor's relentless sweep across the full spectrum of computer hardware. Evolving rapidly from its appearance in the early 1970s as a simple device good only for controlling traffic lights and such, the microprocessor by the mid-1980s was replacing entire minicomputers. Now, ganging up in large numbers, it's poised to fundamentally rewrite the economics of the $52 billion market for mainframes and supercomputers dominated by IBM and Cray Research. While companies such as Intel, Thinking Machines, and nCube aim for both companies' markets, AT&T's NCR is about to launch a specialized attack on the mainframe with a general-purpose version of Teradata's machine. Data-base software company Oracle Corp., meanwhile, is working on software for commercial MPP machines.

COOKIE-CUTTING. Nobody is predicting the immediate demise of mainframes or supers: They're running too much useful software to be discarded. But the economics of MPP hardware grow more compelling every day: Old-style machines, based on proprietary designs, can't touch the microprocessor's tremendous manufacturing economies of scale. The chips get stamped out like so many millions of cookies a year. What's more, using a technology called reduced instruction-set computing, or RISC, micros are within spitting distance of the raw number-crunching speed of conventional large processors. And it looks as if micros will continue doubling in speed every two to three years through the rest of the decade.

Still, despite years of research, programming lots of processors to cooperate on a single problem remains painfully difficult. Researchers so far have identified only a few, narrow classes of jobs that can be easily split into pieces and run faster on multiple processors than on a single processor. The trick is making the pieces as independent of each other as possible so they don't waste time passing data back and forth. Most MPP software, therefore, must be written from scratch, line by line, and by specialists -- a time-consuming, expensive proposition for companies that have billions of dollars invested in conventional software. MPP computers, says Michael Teter, an engineering fellow at Corning Inc., are "years away from being massively useful."
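A rough way to see why that independence matters: if every piece must keep stopping to trade data with the others, the communication overhead puts a ceiling on the payoff no matter how many processors are thrown at the job. The sketch below is an illustrative model with made-up numbers, not a measurement of any real machine.

```python
# Illustrative model of the communication penalty in parallel computing.
# comm_fraction is the share of the one-processor runtime that gets spent
# passing data back and forth once the job is split up (assumed 5% here).

def speedup(processors, comm_fraction=0.05):
    single_processor_time = 1.0
    parallel_time = single_processor_time / processors + single_processor_time * comm_fraction
    return single_processor_time / parallel_time

for p in (10, 100, 1_000, 10_000):
    print(f"{p:>6} processors -> speedup {speedup(p):5.1f}x")
# The speedup climbs toward 1 / comm_fraction = 20x and then stalls,
# which is why the pieces must be kept as independent as possible.
```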

Even so, "the field has definitely evolved from 'what if' to 'when,'" says Jonathan P. Streeter, who tracks supercomputers at the Commerce Dept. Right now, the market is tiny: Smaby Group, a market researcher, estimates worldwide 1991 sales of MPP supercomputers at $270 million, a small fraction of the $2.2 billion spent on all supercomputers (chart). But with MPP revenues expected to grow by 40% a year through 1994, many companies in the U.S. and Europe are scrambling for a position in the market (table).

As small as it is, the fledgling MPP market is in turmoil. No single way of linking a bunch of microprocessors has emerged as the most technically compelling design. That means neither software companies nor customers are sure of which technology to bet on. New alliances, meanwhile, seem to pop up every week. While established computer companies investigate MPP themselves, they're also scrambling to hook up with aggressive startups. Digital Equipment Corp. has an investment stake in MasPar Computer Corp. in Sunnyvale, Calif. IBM recently formed a joint venture with Thinking Machines Corp. to link the latter's Connection Machine with IBM's market-leading mainframe. And Japanese companies are now working together in a government-sponsored Real World Computing Project to develop MPP and other technologies.

RICH UNCLES. The MPP race has seen some early dropouts, too, including Bolt Beranek & Newman, Floating Point Systems, Myrias, and Teraplex. Most haven't been able to raise the cash needed to survive in a market where the technology is changing rapidly and near-term payoffs are so small. Ben Barker, president of BBN's advanced computer subsidiary, says developing a new MPP design can easily cost $50 million. BBN's effort, focused on a computer called the Butterfly Machine, lost about that much and failed to get outside financing. "The only people making it are people with a rich uncle," Barker says.

The richest of those is Uncle Sam. The Pentagon's Defense Advanced Research Projects Agency (DARPA) has been pumping money into MPP research ever since the early 1980s, when the "Star Wars" Strategic Defense Initiative began. Since 1983, DARPA has invested more than $200 million in parallel computing. The SDI program has contributed tens of millions more. "High-performance computing will be essential to our defense in the 21st century, so we'd better make sure there's a technology base," says Stephen Squires, director of computing systems technology at DARPA.

The government's role in this nascent market is controversial. Because DARPA puts some of its MPP budget into direct company grants, some companies complain that the Pentagon is picking MPP winners and losers. Thinking Machines and Intel Supercomputers, a division of chipmaker Intel Corp., are the two main beneficiaries of Pentagon largess. They also have the two biggest shares of the scientific MPP market. "It has locked us out of a lot of sales," complains Stephen Colley, CEO of MPP maker nCube Inc.

Politics aside, MPP is changing radically how government and private industry tackle problems that range from the subatomic to the intergalactic. Take global climate modeling. Scientists regularly simulate the earth's climate patterns to forecast pollutants' effects on the ozone layer, global warming patterns, and acid rain levels. Today's simulations, though, are accurate only about a decade into the future, which isn't enough. The pollutants "may not cause much effect over 10 years, but they might have sizable effects over a century," says David W. Forslund, a physicist and deputy director of advanced computing at Los Alamos National Laboratory.

That's why Rick L. Stevens, a researcher at Argonne National Laboratory, is working on a 100-year climate simulation that will plot how ocean and air currents interact. "The problems we want to solve are orders of magnitude beyond what today's computers are capable of," Stevens says. Controlling pollution "is going to take a great degree of cooperation from a significant fraction of people. These simulations are the only way to state with a great deal of confidence what the consequences of public policies would be," says Steven A. Walker, director of parallel processing applications at Cray Research.

'GETTING CHEAPER.' Eventually, the teraflops computer may also help capture energy's holy grail -- controlled nuclear fusion. Researchers want to spend billions building reactors to test their latest theories on the subject. A teraflops machine could help identify the best design beforehand through simulations. "Before we spent billions, we would have some confidence the thing actually would work," says Los Alamos' Forslund.

Teraflops speeds could also help scientists calculate the effects of toxins on humans. Supercomputers today can simulate molecular interactions, but only with molecules that are relatively simple. "Toxins like dioxin tend to be larger, more complicated molecules," says Argonne's Stevens.

MPP systems are helping corporations, too. Oil companies, traditionally big users of supercomputers, are moving to MPP to analyze seismic data. Says Charles C. Mosher, a research scientist at Arco: "Massively parallel computing is cheap and getting cheaper. Mainframes and traditional supercomputers are staying pretty much flat. I had to ask myself: Which one do I want to be on?" He now uses an Intel iPSC/860 to create three-dimensional images of geological features and oil reservoirs. An nCube computer is doing the same at Shell Oil Co.

The big payoff for MPP, though, may be reaped in the mundane world of data processing, where corporations use lots of mainframes to churn out payroll checks and the like. So far, managing large data bases seems to offer the best opportunity for the new computers. That's because, as companies install thousands of desktop and hand-held computers, they're accumulating far more data than their mainframes can sift through. But hidden patterns in all that data may lead to greater profits.

Discount merchandisers Kmart, Wal-Mart, and Mervyn's, along with AT&T and other telephone companies, have been using Teradata's MPP computer for several years. Harnessing hundreds of Intel 486 microprocessors, the Teradata machine can identify fleeting sales patterns in a matter of hours -- not the days it might take a traditional mainframe. The quick feedback on what colors and styles are selling best each day helps Mervyn's, for instance, order garments from Far East suppliers.
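The kind of search the Teradata machine performs can be pictured with a small sketch: the day's transactions are spread across many processors, each counts what it holds, and the tallies are merged. The field names and the tiny data set below are hypothetical, used only to show the divide-count-merge pattern.

```python
# Toy sketch of a parallel scan over sales records: divide, count, merge.
from collections import Counter
from multiprocessing import Pool

def tally(records):
    # Each processor scans only its own slice of the day's transactions.
    return Counter((r["style"], r["color"]) for r in records)

if __name__ == "__main__":
    sales = ([{"style": "jacket", "color": "red"}] * 5
             + [{"style": "jacket", "color": "blue"}] * 3)
    workers = 4
    size = max(1, len(sales) // workers)
    slices = [sales[i:i + size] for i in range(0, len(sales), size)]

    with Pool(len(slices)) as pool:
        partials = pool.map(tally, slices)        # the slices are counted side by side

    totals = sum(partials, Counter())             # merge the partial tallies
    print(totals.most_common(1))                  # today's best-selling style and color
```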

Such superfast data searches may eventually form the basis of "decision support centers" for executive suites. Business Research Institute is developing one such system, which would let executives call up text, numbers, photos, maps, and even video clips -- all hooked up to an MPP data-base computer.

SOFTWARE MAZE. Ultimately, MPP machines could add great flexibility to how computers get built, too. Conceivably, machines ranging from desktop workstations to supercomputers could be built from multiples of a single processing element. That would mean much lower manufacturing costs and greater flexibility for customers, who could install just the right size of computer now and expand it in the future.

Most of this is talk for now, for two reasons: minimal sales and the daunting software problem. Most MPP machines so far are used in research, not for making money. Robert Schumacher of Carnegie Mellon University, for instance, is using a Connection Machine to study the physics of musical instruments.

Until they're easier to program, the machines' only commercial use will be in special situations -- where the potential payoff is enough to overcome the high cost of creating software. Two recent examples: American Express plans to use two Thinking Machines CM-5 computers to sift through credit-card records and identify the buying patterns of its individual customers. And Prudential Securities Inc. uses an Intel MPP machine to evaluate financial instruments.

In the meantime, intense effort is being focused on the software problem. So far, it has achieved only incremental gains. There's no sign of a grand breakthrough in the hideously complex problem of getting 100, let alone 10,000, different processors working together harmoniously. Only tedious trial and error gets most MPP programs working. But as Paul Messina, executive director of the Concurrent Supercomputing Consortium at the California Institute of Technology, says: "The average user isn't that masochistic."

The best that can be done, says Abraham Peled, director of computer science research at IBM, is to create standards and better MPP programming tools -- programs designed to help software writers keep track of myriad details and identify bottlenecks. "People's intuition just isn't developed for this kind of programming," Peled says. MPP hardware makers, meanwhile, are spending heavily to get useful software written for their machines. MasPar, for one, spends 75% of its development costs on software.

Still, the problems with today's software aren't preventing computer designers from sketching out even more massively parallel machines. Japan's Real World Computing Project talks of creating a 1 million-processor machine that would boast a theoretical peak speed of 125 trillion operations per second -- enough to keep you and your calculator busy for, oh, about 4 million years.

THE MASSIVELY PARALLEL CROWD

LOCKSTEP MACHINES
Some parallel supercomputers have their many processors work in lockstep, each doing exactly the same thing at the same time, but to different pieces of data. Manufacturers: Active Memory Technology, MasPar Computer, Thinking Machines, and Wavetracer

THE INDEPENDENTS
The other style of parallel design gets each processor to follow its own program. That makes the machine more flexible, but much more difficult to program (a toy sketch contrasting the two styles follows this box). Suppliers: Alliant Computer, Intel, Meiko Scientific, nCube, Parsytec, and Thinking Machines. Teradata supplies a highly specialized system to the commercial data processing market

FUTURE PLAYERS
Coming entries in the MPP market: Supercomputer leader Cray Research, startup Kendall Square Research, AT&T's NCR unit, and computer giant IBM
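For the curious, here is the promised toy contrast between the two styles described in the box, written in Python purely for illustration: a lockstep step applies one operation to every element of an array at once, while the independent style lets each worker run its own little program.

```python
# Toy contrast between the two parallel styles in the box above.
import numpy as np

# Lockstep style: one instruction applied to many data elements at once.
data = np.arange(8)
lockstep_result = data * 2 + 1                 # every element gets the same treatment

# Independent style: each worker follows its own program (run one after
# another here, but each could live on its own processor).
tasks = [lambda: sum(range(100)), lambda: int(data.max()), lambda: len("MPP")]
independent_results = [task() for task in tasks]

print(lockstep_result.tolist(), independent_results)
```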

By Russell Mitchell in San Francisco, with Gary McWilliams in Boston, John Carey in Washington, Neil Gross in Tokyo, and John W. Verity in New York

