Solving the Superspeed Dilemma


The future of supercomputers is clear: more speed -- especially more so-called sustained speed when chewing through actual problems in science and engineering. The difference between peak speed and sustained speed can be dramatic. While running certain science and engineering applications, many of today's top supercomputers rarely manage to sustain even 10% of their theoretical top speed.
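How big is that gap? A back-of-the-envelope calculation -- the numbers here are purely illustrative, not the specs of any real machine -- makes the point:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical system: 100 teraflops theoretical peak speed. */
    double peak_teraflops = 100.0;

    /* If real science and engineering codes sustain only 10% of peak... */
    double efficiency = 0.10;

    /* ...the machine's useful speed is just 10 teraflops. */
    double sustained_teraflops = peak_teraflops * efficiency;

    printf("Peak: %.0f teraflops  Sustained: %.0f teraflops\n",
           peak_teraflops, sustained_teraflops);
    return 0;
}
```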

Supercomputer users and suppliers have been bickering over system design, or architecture, ever since clusters of small computers with ordinary microprocessors began elbowing their way into the super market a decade ago. Dozens or hundreds of ganged-together brain chips from Advanced Micro Devices (AMD), IBM (IBM), or Intel (INTC) can zip lickety-split through tough business-computing chores. And because business demand for faster number-crunchers seems insatiable, clusters now dominate the Top 500 list of supers.

BOGGED DOWN. For tackling software written by researchers, though, commodity microprocessors can be snails compared to the special chips that have long been the guts of supers built by Cray (CRAY) and NEC (NIPNY). Computers built with these chips, known as vector processors, regularly post sustained speeds of 40% to 60% of peak speed, and sometimes up to 90% (see BW, 6/7/04, "Supercomputing").

So a vector system with the same peak speed as a cluster-based system can solve scientific problems four to five times faster. On the other hand, if scientific software contains any of the so-called scalar code typically found in business programs, it will severely bog down even the most powerful vector system.
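Why the difference? It comes down to the kind of work each chip streams through best. The sketch below is a simplified illustration in C -- the function names and data are invented for this example, not drawn from any real application. The first loop applies the same arithmetic to every element of an array, which a vector processor can chew through in bulk; the second carries a dependence from one step to the next, the kind of one-at-a-time "scalar" work common in business code, so it can't be streamed through vector hardware.

```c
#include <stddef.h>

/* Vector-friendly: the same arithmetic on every array element.
 * A vector processor can operate on many elements per instruction. */
void scale_and_add(double *y, const double *x, double a, size_t n) {
    for (size_t i = 0; i < n; i++) {
        y[i] = a * x[i] + y[i];
    }
}

/* Scalar-style: each iteration depends on the previous one (a running
 * balance with compounding interest), so the elements must be handled
 * one at a time -- no bulk streaming is possible. */
double running_balance(const double *deposits, size_t n, double rate) {
    double balance = 0.0;
    for (size_t i = 0; i < n; i++) {
        /* Iteration i can't begin until iteration i-1 has finished. */
        balance = balance * (1.0 + rate) + deposits[i];
    }
    return balance;
}
```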

One way out of this vector/scalar dilemma seems obvious: Build hybrid computers that combine both types of processing. Some experts assert that hybrid, or heterogeneous, architectures are the only way to reach the next milestone in supercomputers -- systems that can clobber problems with quadrillions of calculations per second. The geek term for such incredible speed is petaflops (peta for quadrillion, flops for floating-point operations per second).
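For scale, one petaflops is a thousand teraflops, or 10^15 calculations per second. The quick conversion below -- using the 367-teraflops peak figure cited later in this story for the top-ranked BlueGene system -- shows how far even today's leader has to go:

```c
#include <stdio.h>

int main(void) {
    /* One petaflops equals 1,000 teraflops (10^15 flops). */
    double teraflops_per_petaflops = 1000.0;

    /* Peak speed of the top-ranked BlueGene system, per this story. */
    double bluegene_peak_teraflops = 367.0;

    /* BlueGene's peak is a bit more than a third of a petaflops. */
    printf("Fraction of a petaflops: %.1f%%\n",
           100.0 * bluegene_peak_teraflops / teraflops_per_petaflops);
    return 0;
}
```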

POTENTIAL PAYOFFS. Several companies aim to unveil petaflops gear by decade's end, perhaps as soon as 2008. Next year the three remaining contenders for Defense Dept. funding in the High-Productivity Computing Systems competition -- Cray, IBM, and Sun Microsystems (SUNW) -- will submit their final proposals for reaching the petaflops mark to the Defense Advanced Research Projects Agency (DARPA). In 2007 the National Science Foundation may solicit more ideas for ultrafast computers, which could give a lift to previous HPCS hopefuls Hewlett-Packard (HPQ) and Silicon Graphics (SGID).

The potential payoffs from petascale computing -- from discoveries in fundamental physics to new drugs and better product designs -- aren't lost on other countries. Government agencies in Japan, France, and China currently have, or are organizing, similar peta efforts.

They could tap more than a half-dozen computer makers. Japan's NEC and Fujitsu, France's Bull, and China's Lenovo have already tossed their hats into the ring (see BW, 4/25/05, "A PlayBook for Taking on Big Blue").

Still, look for IBM to tighten its stranglehold on BusinessWeek's Top 25 supercomputer list. Various configurations of its BlueGene architecture now hold the first two spots, plus eight more Top 25 rankings -- and a total of 19 of the leading 100 supercomputers on the Top 500 (see BW, 6/14/05, "A Lightning Bolt From the Blue"). IBM's two BlueGene crowns have peak speeds of 367 teraflops and 114.7 teraflops, respectively. And the bar keeps getting higher.

