Be Cool and Save Your Data Center

SAN FRANCISCO and NEW YORK — IT managers need new weapons to battle rising energy costs as their data centers become an increasing source of higher bills and space constraints.

Among the reasons for this negative turn are the growing cost of energy and the power required to cool the increasing density of computer hardware in the data center. Hewlett-Packard engineering research estimates that for every dollar a company spends on IT equipment, it can expect to spend as much or more to power and cool that equipment. As companies add more computing capacity, they can expect those costs to keep rising.

The problem is being attacked by a wide range of interests. They include chip and server makers, engineers working on new fans and cooling mechanisms, and developers of software such as virtualization, which is designed to increase server utilization.

“There is really no magic bullet. It’s something customers are attacking at multiple levels,” said John Humphreys, research manager, enterprise computing at IDC. Humphreys spoke at an HP-sponsored media briefing in San Francisco.

HP plans to roll out what it calls its next-generation adaptive infrastructure later this summer. The idea is to make power and cooling management part of HP’s unified OpenView management system, which gives IT managers a single console view of operations.

Part of HP’s rollout will include new energy-efficient technologies. One example is an electric ducted fan inspired by those used in some remote-controlled airplanes.

The so-called “Active Cool” fan is designed to provide more efficient airflow and to adjust to the changing needs of the data center, spinning faster as more server blades are added, for example. It’s also quieter than traditional fans. HP said it has some 20 patents pending on the technology.
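For illustration, the sketch below shows the general idea of demand-based fan control: speed scales with how full the enclosure is and how warm the intake air gets. The enclosure size, temperature thresholds, and speed range here are assumptions for the example, not HP’s actual control algorithm.

    # Minimal sketch of demand-based fan control in the spirit HP describes.
    # All names and thresholds are hypothetical illustrations, not HP's algorithm.
    def fan_speed_rpm(blade_count, inlet_temp_c, base_rpm=3000, max_rpm=12000):
        """Return a target fan speed that grows with load and temperature."""
        load_factor = min(blade_count / 16, 1.0)          # assume a 16-blade enclosure
        temp_factor = max(0.0, (inlet_temp_c - 20) / 15)  # ramp between 20 C and 35 C
        target = base_rpm + (max_rpm - base_rpm) * max(load_factor, temp_factor)
        return int(min(target, max_rpm))

    print(fan_speed_rpm(4, 22))    # lightly loaded, cool intake
    print(fan_speed_rpm(16, 30))   # full enclosure, warmer intake: runs at max speed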

The Active Cool fan is one of a portfolio of technologies in what HP said is its holistic approach to the data center.

Competitor Sun Microsystems isn’t impressed.

“Fan cooling technologies from HP and IBM are nothing but smoke and mirrors,” said Fadi Azhari, director of outbound marketing at Sun, in an e-mail sent to internetnews.com.

“They require customers to incur significant overhead in their data centers to accommodate what is essentially a band-aid approach to solving the very real and important problems of rising power/cooling/space costs in the datacenter.

“The root of their problems, unfortunately, lie in their chips.”

But Chandrakant Patel, distinguished technologist at HP Labs, said the processor is only a part of the energy challenge.

“When all you are looking at is the processor, it makes sense to talk about cores and the advantages of multi-threading like Sun likes to do. We’re looking at the big picture. I love it when Sun talks about this stuff because it brings the focus on energy efficiency, where we have a lot to offer.”

In addition to rising energy costs, another reason power management has become such a concern is that data center design hasn’t kept up with Moore’s Law. While chip performance has roughly doubled every 18 months, the data center hasn’t changed much in at least a decade.

At HP Labs in Palo Alto, Calif., the company runs a project called Enterprise 2010, where it tests new technologies and simulates real-world issues and performance using its own servers.

HP said it’s been able to reduce cooling costs in its latest offerings by as much as 25 to 30 percent and, in a sign of things to come, by as much as 50 percent in the Labs.

A major European-based bank recently approached HP about a major problem. (HP wouldn’t identify the bank). “They had 5,000 blades, broke every best practice in the book and had run out of space and thermal capacity,” said Paul Perez, vice president of storage, networks, and infrastructure industry standard systems at HP.

HP did an assessment and revamped the data center’s airflow and topology. The result, said Perez, was a 30 percent improvement in density. Heat was brought under control with liquid cooling of the server blades. The data center now has room for up to 1,000 additional blades.

Server blades are one of the fastest-growing segments of the computer industry, but the benefits can be offset by power management issues. IDC’s Humphreys said blade sales have doubled over the past several years. “But there have been concerns over heat, all the way up to one case I know of where there was melting,” he said.

Speaking of heat, Colette LaForce, vice president of marketing at Rackable Systems, gave out some “scary metrics” at an AMD-sponsored event in New York this week.

LaForce said that a traditional rack of blade servers gives off as much heat as 150 light bulbs. “You could actually cook a turkey on that rack.

“What’s even scarier, a large Web business with 100,000 to 200,000 servers easily spends about $50 million a year on power just to run the servers. Tack on another $25 million for a/c to cool them and you’re talking about $75 million a year just to power the servers that power that business,” she said.

“I think we all agree there’s definitely a problem out there in the industry.”
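Those figures hold up to a rough back-of-the-envelope check. The sketch below reproduces the ballpark; the average draw per server and the electricity rate are assumed values for illustration, not numbers from LaForce.

    # Rough sanity check of the quoted costs. The per-server draw and the
    # electricity rate are assumptions for illustration, not from the article.
    servers = 150_000        # midpoint of the 100,000-200,000 range cited
    watts_per_server = 400   # assumed average draw per server (W)
    price_per_kwh = 0.10     # assumed electricity rate ($/kWh)
    cooling_overhead = 0.5   # assumed: cooling adds ~50% on top of server power

    hours_per_year = 24 * 365
    server_kwh = servers * watts_per_server / 1000 * hours_per_year
    power_cost = server_kwh * price_per_kwh
    cooling_cost = power_cost * cooling_overhead

    print(f"Servers: ${power_cost / 1e6:.0f}M per year")                   # roughly $53M
    print(f"Cooling: ${cooling_cost / 1e6:.0f}M per year")                 # roughly $26M
    print(f"Total:   ${(power_cost + cooling_cost) / 1e6:.0f}M per year")  # roughly $79M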

IDC’s Humphreys pointed out that 10 years ago, large organizations typically relied on four or five mainframes to help run their operations. “Now 90 percent of the market is x86 servers and at about $3,000 each, they’re easy for companies to consume.

“But now they’re asking, ‘How can I avoid building new data centers?’”

Erin Joyce contributed to this article.
