With the release of its new Barcelona server chips, AMD is pushing a new way of measuring performance that focuses on power needs
Tommy Minyard is a power planner—literally. As the assistant director at Texas Advanced Computing Center in Austin, Tex., he's helping to assemble what will be one of the most powerful supercomputers in the world. The behemoth will include 3,936 servers, each capable of running its own network of machines.
One of the biggest challenges facing planners like Minyard is figuring out how much power these data centers need (when fully operational, the center will consume about two megawatts) and how much energy and money are required to keep them from overheating. Budget too little power and cooling and a company may end up keeping servers idle, reducing the amount of computing that can be done. Budget too much and a company risks overspending on resources such as air conditioning.
A 2005 survey by Strategy Group, an independent research firm, found that 70% of organizations with data centers were concerned about power supply and cooling, and that 85% of the companies surveyed had dealt with problems related to power supply or cooling. They, in turn, have been demanding more power-efficient products (BusinessWeek, 5/14/07), not only from chipmakers like Advanced Micro Devices (AMD) and Intel (INTC), but also from computer companies like Dell (DELL), Hewlett-Packard (HPQ), IBM (IBM), and Sun Microsystems (JAVA) that make the servers. "Power consumption is so important that it's driving chip designs and computer designs now," says Jim McGregor, an analyst at In-Stat/MDR.
A New Way to Measure
To help Minyard and other engineers plan better, AMD, the maker of the chips being used by Texas Advanced Computing Center, is encouraging the adoption of a new method for measuring computer power consumption. With the release of its new line of server chips, code-named Barcelona, AMD is emphasizing the average power use of each chip rather than assuming that all chips will be running flat out all the time. The new metric, referred to as Average CPU Power, is designed to help customers get a more realistic idea of how much power and cooling they'll need in real-world situations. "Over-budgeting can be a big problem," says Brent Kerby, a product manager at AMD. "If you build more cooling than you really need, you're stuck with it. But if you don't need it, why spend the money?"
That's a switch from the traditional method, known as Thermal Design Power, or TDP, which measures the theoretical maximum amount of power a chip can draw, sort of like the top speed of a car. Just because a car can go 180 miles per hour doesn't mean it's always going to be driven at that speed. The same goes for chips in servers, AMD argues. Why build a data center assuming it will always be running at maximum load when you can save money by assuming otherwise?
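The budgeting difference AMD is describing can be sketched with a toy calculation. The per-chip wattages, chips-per-server count, and overhead factor below are hypothetical illustrations, not AMD's published figures; the only number taken from the article is the cluster's 3,936 servers.

```python
# Toy comparison of provisioning a data center by worst-case (TDP-style)
# vs. average power ratings. All wattages and per-server chip counts are
# hypothetical examples, not AMD specifications.

def facility_kw(servers, chips_per_server, watts_per_chip, overhead=1.0):
    """Facility power budget in kilowatts for the chips alone,
    scaled by an overhead factor for cooling and power delivery."""
    return servers * chips_per_server * watts_per_chip * overhead / 1000.0

SERVERS = 3936   # size of the TACC cluster described in the article
CHIPS = 4        # assumed chips per server (hypothetical)
TDP_W = 95       # hypothetical worst-case rating per chip, in watts
AVG_W = 75       # hypothetical average-power rating per chip, in watts
OVERHEAD = 1.5   # assumed cooling/power-delivery overhead factor

budget_tdp = facility_kw(SERVERS, CHIPS, TDP_W, OVERHEAD)
budget_avg = facility_kw(SERVERS, CHIPS, AVG_W, OVERHEAD)
print(f"Worst-case budget:    {budget_tdp:8.1f} kW")
print(f"Average-power budget: {budget_avg:8.1f} kW")
print(f"Capacity freed up:    {budget_tdp - budget_avg:8.1f} kW")
```

Even with made-up ratings, the gap between the two budgets scales with every server added, which is why a planner like Minyard can trade a lower power-and-cooling estimate for more compute within the same grant.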
Minyard says testing AMD chips helped him fine-tune his planning assumptions. "We did shave a little off being able to get more computing power within the budget," Minyard says. "Instead of spending it on cooling and power, we were able to spend a little more on computing power." Even small savings add up when building data centers—huge concentrations of servers operated by such institutions as universities and Internet companies like Google (GOOG) and Yahoo! (YHOO). In a report to Congress published Aug. 7, the U.S. Environmental Protection Agency found that data centers consumed 61 billion kilowatt hours in 2006—enough power to run 5.8 million households, and more than the power consumed by every color TV set in the country. The cost for all that juice: $4.5 billion. Texas Advanced Computing Center is being built with $30 million from the National Science Foundation.
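The EPA figures above can be sanity-checked with a line of arithmetic: $4.5 billion for 61 billion kilowatt hours implies an average rate of roughly 7.4 cents per kilowatt hour, and 61 billion kWh spread over 5.8 million households implies annual usage in line with a typical U.S. home.

```python
# Back-of-the-envelope check on the EPA's 2006 data-center figures
# cited above; no new data, just the article's own numbers.
total_kwh = 61e9      # kWh consumed by U.S. data centers in 2006
total_cost = 4.5e9    # total electricity cost, in dollars
households = 5.8e6    # households the same energy could power

price_per_kwh = total_cost / total_kwh       # implied average rate
kwh_per_household = total_kwh / households   # implied annual use per home

print(f"Implied price: {price_per_kwh * 100:.1f} cents/kWh")
print(f"Implied household use: {kwh_per_household:,.0f} kWh/year")
```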
Faster Isn't Always Better
The question of how best to measure a chip has long been subject to revision. For years computer makers reckoned the faster, the better. Hence the megahertz race of the late 1990s, when Intel and AMD battled back and forth for bragging rights to the fastest PC microprocessor. More recently it has become clear to chip-design engineers that simply raising clock speeds as transistors shrank stopped yielding the expected benefits.
Enter the age of dual-core—and more recently, quad-core—microprocessors, in which each chip contains more than one core, the central brain of the chip. Two-core chips are now common in personal computers, and four-core chips have been on the market for high-end servers for several months, first from Intel and now from AMD, with the Sept. 10 release of Barcelona.
But fundamental changes in the way chips are designed have also changed the metrics used to judge one against another. Clock speed is misleading: a chip with two or four cores typically runs at a lower clock rate than a single-core chip, yet it finishes its computing tasks faster because it breaks the work into smaller, more manageable pieces as needed. So among the many metrics that have emerged in recent years, the amount of power a chip consumes has taken on greater significance.
Tougher to Compare
What difference will using average power make? McGregor says adding yet another statistical metric, while helpful in theory, will also add to the confusion in the marketplace over which measurements matter when buying a server. "It's a good idea, but many people simply aren't going to understand it," he says. "And it's even harder to compare systems running AMD chips vs. Intel chips because their designs are so vastly different."
AMD hopes its new emphasis will result in higher sales. After seeing its share of the server chip market soar (BusinessWeek, 5/3/06) from 3% in the second quarter of 2003 to 26% in the second quarter of 2006, AMD has been suffering under Intel's counterattack. AMD's share plunged back to 13% in the second quarter of 2007, according to market research firm IDC. Intel's advance may have continued in the current quarter. Stealing some of AMD's thunder, Intel said on Sept. 10 that revenue this quarter will be higher than previously expected. It said it will book from $9.4 billion to $9.8 billion in sales, up from a range of $9 billion to $9.6 billion forecast earlier this year. The company also said it expects to earn gross profit margins of more than 52%.