Clean Your Datacenter, Cut Your Electric Bill?
The Uptime Institute is preparing to release a series of case studies, based on research conducted in 2008, showing that many datacenters are poorly managed and could become much more efficient without spending a dime on new gear.
The common sales pitch for achieving datacenter efficiency is to toss out all of the old and replace with shiny new blinking black boxes. That might suit the likes of IBM, HP and Dell just fine, but Kenneth Brill, executive director of the Uptime Institute, an organization dedicated to improving datacenter power efficiency, said that's not what's needed.
"This is not a technology problem it's an economic and leadership problem," he told InternetNews.com. "The economics of IT has fundamentally changed. Those companies that don't get it will start to see an economic impact."
Currently, datacenter facilities average around eight percent of total IT expenses, with some as high as 15 percent, and that share is expected to reach 20 percent in the coming years. One reason is that datacenters have moved off-site. In the company of yesteryear, the "datacenter" was a single room in the corporate offices with an IBM mainframe on a raised floor.
Today, companies have their rows upon rows of x86 servers, switches, backup systems and storage housed in separate facilities, and are putting so much into brick and mortar facilities that profitability is being affected, said Brill.
Under one roof
The first problem is that these facilities are often managed not by IT but by another part of the company that handles facilities. "IT never sees their own energy bill," said Brill. "The CIO needs to be more economically aware of these things."
Facilities and IT expenses are siloed and kept separate from each other in most organizations, which Brill says has to change. "Companies need to move facilities and IT under a common organization so IT has a motivation to fix the brick and mortar part of the equation," said Brill. Facilities used to be a trivial expense, but as datacenters grew to the size of football fields, it's not trivial any more, he added.
Dean Nelson, senior director of global lab & datacenter design services for Sun Microsystems (NASDAQ: JAVA), mentioned this very issue recently in a discussion of Sun's new high efficiency datacenter in Colorado.
"I would say if you don't have someone in my job, you need one, because the job is to translate between the facility and it groups, and if you don't they become opposing forces, usually, because they have different agendas," he said.
The facilities side wants to keep things small and avoid building if at all possible, while the IT side is all about performance and asks for density, which facilities doesn't want to pay for.
"I moved out of IT and engineering for the company and into real estate. It was a really interesting transition but I'm glad we did this because it gave me a completely different perspective as to what they need on the real estate side. Also, we brought the competence around IT into real estate so we can make more informed decisions," said Nelson.
It makes a difference. Nelson said that after Sun's restructuring in Colorado, the power usage effectiveness (PUE) rating of the datacenter dropped from 4 to 1.28. PUE is the ratio of a datacenter's total power draw to the power actually delivered to its IT equipment: a PUE of 1.00 would mean every watt goes to computing, and everything above 1.00 is overhead, mostly cooling.
Uptime estimates the typical datacenter has a PUE of 2.5, meaning that for every watt powering IT equipment, another 1.5 watts goes to cooling and other overhead. Sun's old PUE of 4 meant three watts of overhead for every watt of computing; its new 1.28 means just 0.28 watts.
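The arithmetic above can be sketched in a few lines of Python. This is just an illustration of the PUE ratio as described in the article; the function names and sample wattages are invented for the example, not taken from Uptime, Sun, or Google.

```python
# Illustrative sketch of the PUE arithmetic (names and figures are hypothetical).
# PUE = total facility power / power delivered to IT equipment.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total draw divided by IT draw."""
    return total_facility_kw / it_equipment_kw

def overhead_per_it_watt(pue_value: float) -> float:
    """Watts of cooling and other overhead per watt of IT load."""
    return pue_value - 1.0

# The figures quoted in the article: a typical datacenter at 2.5,
# Sun's Colorado facility before (4.0) and after (1.28) its redesign.
for label, value in [("typical", 2.5), ("Sun before", 4.0), ("Sun after", 1.28)]:
    print(f"{label}: PUE {value} -> {overhead_per_it_watt(value):.2f} W overhead per IT watt")
```

So a facility drawing 250 kW in total while its servers consume 100 kW would score a PUE of 2.5, matching Uptime's estimate of the typical datacenter.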
Google (NASDAQ: GOOG), which builds its own datacenters and virtually everything inside them, recently posted a blog update showing its datacenters have hit a remarkable PUE of 1.19 for the last 12 months on average, and 1.16 in the fourth quarter of 2008, its lowest PUE to date.