Virtualization Brings New Data Recovery Concerns, Benefits

SAN FRANCISCO—More than 10,000 attendees swarmed the VMworld 2007 Conference Wednesday, mixing and mingling among dozens of vendor booths scattered throughout the bowels of San Francisco’s Moscone Center.

The fact that the sixth anniversary of the 9/11 terrorist attacks fell smack dab in the middle of the three-day conclave thankfully appeared to be irrelevant to participants more concerned with ESX servers, lei-dispensing booth bunnies and the prospect of one-click provisioning software for their beloved virtual machines.

But there’s nothing virtual about the threat posed to corporate datacenters by another terrorist attack, a garden-variety fire, an extended power outage or even something as seemingly benign as a rogue sprinkler system.

Lost in all the hoopla surrounding virtualization and its inherent benefits—less power consumption, fewer servers in the datacenter, optimized server efficiency, etc.—is the daunting challenge datacenter managers now face when it comes time to plan, test and—hopefully never—execute a disaster recovery program to salvage the business-critical applications residing within their increasingly virtualized datacenters.

Companies looking for a solution to the new types of problems created by incorporating both physical and virtual servers into their datacenters really have only two choices: hire a services provider like SunGard or IBM to host and manage snapshots of the server environment and provide additional capacity when needed, or do it themselves with some help from another software vendor.

“The more you put in one basket, the more things that can go wrong with that basket,” Don Norbeck, director of product development at SunGard, told a couple hundred attendees during his “virtual recovery” presentation. “Everyone wants to lessen hardware and reduce power consumption and space. But people need to take a step back and assess their environments and infrastructures before they start their virtualization projects.”

The idea of getting more bang out of your server buck by having existing physical servers use some of their excess capacity to host additional applications and operating systems isn’t a Web 2.0 phenomenon. It’s not even an Internet idea.

Though it may come as quite a surprise to some of this year’s attendees, virtualization has been around for more than 40 years. The rub had always been simplifying the process to hide the complexity required for this orchestration to take place.

But after VMware raised almost $1 billion in one day from its initial public offering and, the next day, Citrix Systems forked over $500 million to acquire XenSource, the virtualization buzz reached its crescendo.

“For most of the people here today and most of the companies who have embraced virtualization, the ROIs were based solely on consolidation,” Norbeck said. “What’s really made virtualization interesting in the past couple years is portability, the ability to move your workload from one location to another.”

This ability to move applications and operating systems from one virtual machine to another, from a virtual machine to a physical machine, or in any other combination of the two is also the source of much angst for datacenter managers responsible for their companies’ data availability and data recovery plans.

Because all large corporations—and most small- and mid-sized firms—didn’t have the benefit of a crystal ball, applications and the operating systems running those applications grew in a staggered, chaotic fashion and can’t always be configured, provisioned or moved around in a tidy, virtualization-friendly box.

Even today, data stored on some servers can only be reincarnated on an identical server after a catastrophe like a hurricane or a building collapse destroys the physical server.

Such an event is exactly why digital tape is still most companies’ data recovery option of first and last resort. But tape is one big headache. It’s a manual process involving hours of tedious indexing and labeling, and companies either have to find a place to store all those tapes or pay someone else to do it, defeating at least one purpose of virtualizing the datacenter in the first place.

“Tape gives you a data-only view of the world,” said John Stetic, founder and senior director of product and services at Toronto-based PlateSpin, during his “virtualization as recovery platform” presentation.

“The data is backed up but the systems are not. How do you effectively recover an entire server? The data is helpful but what about the tuning parameters for a SQL server, the patches and the database drivers? Everything that goes around that data is important. The more you have to do, the longer it takes to recover your environment,” said Stetic.

PlateSpin’s PowerRecon software basically does an inventory of a datacenter’s physical and virtual machines, telling users how much processing capacity is required to keep them humming along and helping locate excess capacity, so IT managers know where and how to begin consolidating their datacenters. Its PowerConvert software streams workloads between physical servers, virtual machines, blade servers and backup archives.
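The capacity-inventory idea behind that kind of tool is simple enough to sketch. The snippet below is a minimal illustration in Python, not PlateSpin’s actual software or API; the server names, utilization figures and headroom threshold are hypothetical, but it shows the sort of arithmetic such a tool automates when it flags hosts with room to absorb consolidated workloads.

```python
# Minimal sketch of a capacity inventory; all data here is hypothetical,
# and this is not PlateSpin's API.
servers = [
    {"name": "web-01",  "cpu_cores": 8,  "avg_cpu_util": 0.15},  # mostly idle
    {"name": "db-01",   "cpu_cores": 16, "avg_cpu_util": 0.70},  # busy
    {"name": "file-01", "cpu_cores": 4,  "avg_cpu_util": 0.10},  # mostly idle
]

HEADROOM = 0.80  # assume we never plan a host hotter than 80% average utilization

for s in servers:
    used_cores = s["cpu_cores"] * s["avg_cpu_util"]
    spare_cores = s["cpu_cores"] * HEADROOM - used_cores
    print(f'{s["name"]}: ~{spare_cores:.1f} cores of spare capacity '
          f'available for consolidated workloads')
```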

“The biggest challenge of whole system recovery is resolved because you no longer need the same type of hardware to restore your data,” Stetic said. “Now this recovered workload is running just fine in a virtual machine. And if the original physical server is destroyed, you don’t have to replace it. It’s accidentally become part of your consolidation plan.”
