Technology

To Add Speed, Chipmakers Tune Structure


IBM, Intel, and AMD are finding ways around the physical problems that have hampered their efforts to make chips faster

To understand the quest to build ever faster and more powerful computers, it's helpful to first understand the problems that hold them back.

While chips themselves are getting faster all the time, faster is a relative term. Even though chipmakers like Intel (INTC) and IBM (IBM) are building more powerful chips every 12 to 18 months, other chips that go inside a computer haven't historically kept up in the performance race. If you think of a microprocessor as a fast-talking, fast-moving dynamo that never takes long to get anything done, then a dynamic random access memory (DRAM) chip is a bit more of a loafer, forcing the processor to wait for it before it can get on with the task at hand. Worse, between them lies a narrow hallway, the bus that shuttles data back and forth, and it tends to get crowded easily. When it does, work piles up.
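
To see how much the waiting matters, consider a small experiment. The C program below is a sketch of our own, not anything from IBM or Intel, and the array sizes are illustrative. It chases pointers through two arrays: one small enough to live in the processor's fast on-chip memory, and one so large that nearly every step has to go out to DRAM. The work is identical; only the waiting differs.

    /* Illustrative microbenchmark: chase pointers through a small array
     * (fits in fast on-chip memory) and a large one (spills to DRAM).
     * The same number of loads takes far longer when each one must
     * wait on main memory -- the bottleneck described above. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double chase(size_t n, long iters) {
        size_t *next = malloc(n * sizeof *next);
        if (!next) { perror("malloc"); exit(1); }
        for (size_t i = 0; i < n; i++) next[i] = i;
        /* Sattolo's shuffle: yields one big cycle, defeating prefetch */
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = rand() % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }
        size_t p = 0;
        clock_t start = clock();
        for (long k = 0; k < iters; k++) p = next[p];  /* each load waits on the last */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        fprintf(stderr, "%zu\r", p);  /* keep the loop from being optimized away */
        free(next);
        return 1e9 * secs / iters;    /* nanoseconds per load */
    }

    int main(void) {
        long iters = 20 * 1000 * 1000L;
        printf("small array (fits on chip):  %.1f ns per load\n", chase(1u << 12, iters));
        printf("large array (out to DRAM):   %.1f ns per load\n", chase(1u << 24, iters));
        return 0;
    }

On a typical machine, the second number comes out many times larger than the first: that gap is the loafer making the dynamo wait.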

If it all sounds a little like a discarded plot for a Dilbert cartoon, you're not far from the truth. The solution could come straight from the playbook of the pointy-headed manager, but with one key difference: It works brilliantly. First, make the slowpoke move a lot faster. Then put the two in the same office, so they can work side by side with nothing in the way.

Combo Chips

That's a simple way to describe a new approach to computer chip design that IBM unveiled on Feb. 14 at the International Solid State Circuits Conference, a chip technology event in San Jose, Calif. IBM calls the approach eDRAM—the "e" stands for "embedded"—and says that combining the two types of chip on a single piece of silicon will substantially improve processor performance. IBM plans to build the technique into its chips beginning in 2008.

By embedding DRAM directly onto the processor, IBM can do away with another type of memory usually built into a processor: SRAM, or static random access memory, which is typically faster than DRAM and acts as a go-between linking the processor and the DRAM. The catch is that SRAM takes up a lot of space on a processor, and with chips shrinking all the time, clearing it away frees up a lot of valuable room.
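
Here's one way to picture that go-between role. The toy C model below is our illustration, not IBM's design, and the latency figures are assumptions chosen only to show the shape of the trade-off: a small, fast SRAM-like cache sits in front of slow DRAM, and requests that hit the cache skip the long trip.

    /* Toy model of a fast go-between memory in front of slow DRAM.
     * Hits cost little; misses pay full DRAM latency.  All figures
     * are assumed, for illustration only. */
    #include <stdio.h>

    #define CACHE_LINES 256   /* tiny direct-mapped cache */
    #define SRAM_NS 2         /* assumed hit latency  */
    #define DRAM_NS 60        /* assumed miss latency */

    static unsigned long tags[CACHE_LINES];
    static int valid[CACHE_LINES];
    static long total_ns = 0;

    static void load(unsigned long addr) {
        unsigned long line = addr / 64;        /* 64-byte cache lines */
        unsigned idx = line % CACHE_LINES;
        if (valid[idx] && tags[idx] == line) {
            total_ns += SRAM_NS;               /* hit: served nearby */
        } else {
            total_ns += DRAM_NS;               /* miss: fetch from DRAM */
            tags[idx] = line; valid[idx] = 1;
        }
    }

    int main(void) {
        /* Sweep the same 16 KB buffer twice: the first pass misses,
         * the second is absorbed almost entirely by the cache. */
        for (int pass = 0; pass < 2; pass++) {
            long before = total_ns;
            for (unsigned long a = 0; a < 16384; a += 8) load(a);
            printf("pass %d: %ld ns\n", pass + 1, total_ns - before);
        }
        return 0;
    }

The second pass comes out several times cheaper than the first. IBM's bet is that once the DRAM itself is fast and sitting on the same silicon, the middleman can be cut out entirely.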

Don't expect this approach to chipmaking to show up on any mainstream servers or other computers, says Nathan Brookwood, head of market research firm Insight64 of Saratoga, Calif. He says that since most servers require about 2 gigabytes or more of memory—more than can easily be crammed onto a single chip—the design approach is likely to be used only with chips aimed at specialized applications that won't need much memory to begin with.

IBM, Brookwood says, has been talking about embedding DRAM directly onto processor chips for years. "What's new is the speed at which they say the memory is running," he says. "And while for selected applications that don't require a lot of memory this is marvelous, it's not a solution for people who are building mainstream general-purpose computers and servers."

Core Issues

But IBM's new method is indicative of how chip companies are finding ways around the physical problems that have held them back from making chips go faster. Another approach popular in recent years is to build chips with two or more cores (a core being the central brain of a chip) and divide computing tasks among them, a silicon-age proof of the old saying that "many hands make light work."
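
The division of labor is easy to sketch in software. The short C program below is an illustration of the idea, not code from any chipmaker: it splits one big summation into four chunks and hands each to its own thread, which the operating system can schedule onto separate cores.

    /* Minimal sketch of "many hands make light work": split one big
     * summation across several threads, one per core.
     * Compile with: cc -O2 -pthread sum.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 24)
    #define CORES 4

    static double *data;

    struct slice { int id; double partial; };

    static void *sum_slice(void *arg) {
        struct slice *s = arg;
        size_t lo = (size_t)s->id * N / CORES;
        size_t hi = ((size_t)s->id + 1) * N / CORES;
        double acc = 0;
        for (size_t i = lo; i < hi; i++) acc += data[i];
        s->partial = acc;              /* each core sums its own chunk */
        return NULL;
    }

    int main(void) {
        data = malloc(N * sizeof *data);
        if (!data) { perror("malloc"); return 1; }
        for (size_t i = 0; i < N; i++) data[i] = 1.0;
        pthread_t tid[CORES];
        struct slice s[CORES];
        for (int i = 0; i < CORES; i++) {
            s[i].id = i;
            pthread_create(&tid[i], NULL, sum_slice, &s[i]);
        }
        double total = 0;
        for (int i = 0; i < CORES; i++) {
            pthread_join(tid[i], NULL);
            total += s[i].partial;     /* combine the partial results */
        }
        printf("sum = %.0f (expected %d)\n", total, N);
        free(data);
        return 0;
    }

With four cores doing the adding, the job finishes in roughly a quarter of the time, at least for tasks that split this cleanly.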

Dual-core chips are already in the mainstream, and chips with four cores are showing up at the high end in commercial servers as well. Earlier this week, Intel announced that it had built a chip with 80 cores capable of completing 1 trillion computations every second (see BusinessWeek.com, 2/12/07, "Intel Builds the Fastest Chip Ever"). For all its impressive performance, Brookwood says the chip is mostly a research vehicle that likely won't be applied to Intel's mainstream product line for several years.

But making chips faster is only half the job. One of the most fundamental problems facing chipmakers and computer manufacturers these days is making each successive generation of chips run faster while consuming no more power than the one before. Consuming power generates heat, which requires cooling, which requires even more power. "When you think of all the power that goes into a modern data center, half of it goes for air-conditioning," Brookwood says.

AMD's Power Switch

Getting control of that power consumption is the aim of a new technology that Advanced Micro Devices (AMD) announced on Feb. 12. The company says it has developed the means to throttle the power drawn by each core of a chip up or down independently.

AMD says the technology will allow a four-core server chip to replace a two-core server chip without requiring any additional cooling or power. The secret lies in being able to shut down tiny portions of the chip entirely for the very brief periods—a few nanoseconds here, a few nanoseconds there—when they're not in use. Multiply those moments by hours, days, weeks, and months, and the savings add up to a lot of extra computing horsepower without the need for more power. It's not unlike doubling the number of cylinders in a car's engine without significantly lowering its gas mileage. The technology will first appear on AMD chips in the middle of this year.
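
The real switching happens in silicon, in nanoseconds, far beyond the reach of software. But the accounting is easy to mimic. The toy C simulation below is our sketch, not AMD's mechanism, and the power figures are invented for illustration: it compares a four-core chip that keeps idle cores fully powered with one that shuts them off the instant they go quiet.

    /* Conceptual sketch only: the gating itself is done in hardware.
     * This simulation just shows the bookkeeping -- cutting power to a
     * core whenever it idles trims the total energy bill without
     * slowing any busy core.  All power figures are assumed. */
    #include <stdio.h>
    #include <stdlib.h>

    #define CORES 4
    #define TICKS 1000000L  /* simulated time slices */
    #define BUSY_MW 25000   /* assumed power of an active core, in milliwatts */
    #define IDLE_MW 10000   /* assumed power of an idle core left running */
    #define GATED_MW 500    /* assumed power of a core shut off while idle */

    int main(void) {
        long long ungated = 0, gated = 0;
        srand(42);
        for (long t = 0; t < TICKS; t++) {
            for (int c = 0; c < CORES; c++) {
                int busy = rand() % 100 < 60;         /* core busy 60% of the time */
                ungated += busy ? BUSY_MW : IDLE_MW;  /* old way: idle cores still burn power */
                gated   += busy ? BUSY_MW : GATED_MW; /* new way: idle cores go dark */
            }
        }
        printf("without gating: %lld mW-ticks\n", ungated);
        printf("with gating:    %lld mW-ticks\n", gated);
        printf("saved:          %.0f%%\n", 100.0 * (ungated - gated) / ungated);
        return 0;
    }

Under these made-up numbers the gated chip uses roughly a fifth less energy for identical work, which is the headroom AMD says it can spend on two extra cores.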

Hesseldahl is a reporter for BusinessWeek.com.
