Utility Computing Shines in Blackout

The recent blackout that left about 50 million people in the U.S. and Canada without power pushed disaster recovery and business continuity back to the forefront to a degree not seen since the events of Sept. 11, 2001.

In the process, the outage helped shine a spotlight on the increasing role utility computing is playing in those fields.

“Along with its impact across myriad consumer and business activities, the blackout provided a real world pop quiz on the resilience of the Internet and the effectiveness of enterprise disaster recovery plans,” said Charles King, analyst with research and consulting firm Sageza Group.

“Keynote Systems, which monitors Internet performance, said that the major Internet backbones in the 25 largest U.S. metropolitan areas functioned normally following the event. SunGard Availability Services, which delivers disaster recovery support for enterprises, announced that it had received disaster declarations from 62 companies, and that an additional 100 had put the company on alert. This is the largest number of declarations supported by SunGard since the World Trade Center attacks, when the company received 77 declarations.”

King noted that in the wake of the outage, IBM declared a “BizCon red” emergency at its disaster recovery and business continuity center in Sterling Forest, NY, and activated its call center in Boulder, Colo., to handle the overflow traffic.

Meanwhile, Commerzbank’s primary data center and two disaster recovery sites were all affected by the blackout, but the bank relied on generators to continue replicating data on its EMC Symmetrix systems.

Canada-based Telus used generators to power Hewlett-Packard and Sun systems in its two Toronto data centers.

“Overall, we believe that enterprises serious about [business continuity]
issues should follow the largely autonomous disaster recovery lead of
companies like Commerzbank and Telus, or consider enlisting services such as those provided by SunGard and IBM,” King said.

Like electricity, utility computing is a service provisioning model: a service provider delivers computing resources and infrastructure management to customers on demand, and customers are charged for what they actually use rather than a flat rate.
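The pricing distinction is simple arithmetic. A toy sketch, with invented figures purely for illustration (neither the rates nor the function names come from any vendor's actual billing):

```python
# Toy comparison of metered (utility) pricing vs. a flat-rate contract.
# All figures are hypothetical, for illustration only.

def metered_cost(cpu_hours: float, rate_per_hour: float) -> float:
    """Charge only for resources actually consumed."""
    return cpu_hours * rate_per_hour

flat_monthly = 10_000.00                    # assumed flat-rate contract price
light_month = metered_cost(1_200, 5.00)     # 1,200 CPU-hours at $5/hour
heavy_month = metered_cost(2_500, 5.00)     # a demand spike

# Under metering, a light month costs less than the flat rate,
# while a heavy month can cost more -- usage drives the bill.
print(light_month)   # 6000.0
print(heavy_month)   # 12500.0
```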

The model bears many similarities to other on-demand computing models such as grid computing and autonomic computing. Because the computing resources and infrastructure are supplied by a provider that often operates numerous data centers around the world, if one data center loses power or is hit by some other disaster, the service can rapidly be provisioned from another data center.
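That failover logic can be sketched in a few lines. This is a minimal illustration of the idea, not any provider's API; the class and function names are invented:

```python
# Hypothetical sketch of multi-site failover in a utility computing model:
# when the preferred data center is down, provision from the next healthy one.

class DataCenter:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

def provision(service: str, data_centers: list) -> str:
    """Provision `service` from the first healthy data center in the list."""
    for dc in data_centers:
        if dc.healthy:
            return f"{service} provisioned from {dc.name}"
    raise RuntimeError("no healthy data center available")

# The primary site is knocked out (say, by a blackout); service
# falls through to the surviving site automatically.
sites = [DataCenter("Toronto", healthy=False),
         DataCenter("Philadelphia")]
print(provision("billing-app", sites))  # billing-app provisioned from Philadelphia
```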

“It validates the model rather than discredits it,” Ahmar Abbas, analyst with research firm Grid Technology Partners, said of utility computing’s performance during the outage.

“The way a utility computing model reacts to a minor failure that ends up being a huge failure is totally different [than how a power grid reacts], primarily because there are a lot of built-in mechanisms that are constantly observing the load on systems and will bring up additional capacity as necessary,” Abbas said.
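The monitoring loop Abbas describes can be sketched as a simple capacity rule: watch average load and add or release servers as it crosses thresholds. The thresholds and function below are assumptions for illustration, not any vendor's implementation:

```python
# Illustrative sketch of a built-in mechanism that observes system load
# and brings up additional capacity as necessary. Thresholds are assumed.

def rebalance(active_servers: int, load_per_server: list,
              high: float = 0.8, low: float = 0.3) -> int:
    """Return the new server count after one monitoring pass."""
    avg = sum(load_per_server) / len(load_per_server)
    if avg > high:                         # overloaded: bring up capacity
        return active_servers + 1
    if avg < low and active_servers > 1:   # mostly idle: release capacity
        return active_servers - 1
    return active_servers                  # within normal range: no change

servers = 2
servers = rebalance(servers, [0.90, 0.95])  # load spike: scales out to 3
print(servers)  # 3
```

Because the check runs continuously, a minor failure that shifts load onto the surviving servers triggers new capacity before it cascades, which is the contrast Abbas draws with a power grid.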

That sentiment is echoed by Brian Fowler, director of Hewlett-Packard’s Business Continuity business. HP and rival IBM are two of the biggest providers of utility computing services.

“I would say that business continuity is an essential component of grid [and utility] computing,” Fowler told internetnews.com.

Fowler added, “We have a business recovery center in Philadelphia, about 50,000 square feet, dedicated to business recovery services for contracted customers. We had one customer who mirrors their data into our recovery center; when the power outage happened for them, they didn’t have diesel generators, so they switched over to our recovery center and were running their production environment out of our recovery center for a couple of days until their power returned.”

“We have those types of centers around the globe,” Fowler added, noting that the company has 50 recovery centers in 35 countries.

That type of geographical dispersion is one of the key reasons that utility computing is becoming an important tool for disaster recovery and business continuance, Abbas said.

“In a utility computing environment, you should be able to bring up
additional capacity and servers in an on-demand fashion from any
environment, not specifically the geographical environment that you
initially signed up for,” he said.

Fowler added: “From an Adaptive Enterprise point of view, business continuity is one of the solutions that we’ve identified as critical to have. IT infrastructure has to be flexible. It also needs to be robust, secure and scalable. What we have in HP, in our Business Continuity Solutions, is a range of solutions that cover different recovery times that customers are looking for.”

HP offers a range of business continuity services, from the high-end, in
which customers need to recover in minutes, to the other end of the scale,
which covers less important applications that a customer could go without
for a few days, if necessary.

“I think you clearly have to design continuity into the initial IT infrastructure that you’re trying to deploy,” Fowler said. “Whether that’s UDC [Utility Data Center] or non-stop computing — whatever you’re looking to set up — the better you understand what the business requires of you as an IT operator in terms of recovery time, then you can design recovery into your architecture.”

He added, “I think customers are listening to that. They were much better
prepared for this type of event than they were prior to the 9/11 incident.”
