There has been a lot of debate over the past year about how storage virtualization should be done and who has the best technology. Some say it should be done in the switch, some prefer the array, others insist that it must be contained within an appliance, and a growing number believe it belongs in all three.
Vendors have taken their own positions. IBM came out of the gate ahead of the rest with its SAN Volume Controller (SVC) appliance about two years ago. Hitachi Data Systems (HDS) followed up last year with its TagmaStore Universal Storage Platform (USP), an array-based option. And in recent months, EMC's Invista has gained plenty of press as a switch-based solution. So which technology and which vendor are best? And which approach will ultimately win the storage virtualization race?
“It’s very hard to compare storage virtualization technologies, as they are mostly theories at this point,” says Rick Villars, a storage analyst for International Data Corp (IDC) of Framingham, Mass. “We will need time to see how well they deploy in the real world.”
Slowly Virtual
Despite the hype, the corporate world has been slow to adopt storage virtualization technology. According to an IDC study of 269 IT managers at companies of all sizes, only 8 percent are doing any virtualization at all, while an average of 23 percent plan to implement it within the next 12 months.
Among companies with 10,000 or more employees, usage rises to 19 percent, with 31 percent planning to add a virtualization component within a year. In the mid-sized segment (1,000 or more employees), very few are using it yet, although 33 percent say they want to harness the technology before the end of 2006.
Across these different segments, then, different pressures and needs are at play.
“Mid-size companies mainly want to manage data migration and reduce their administrative burdens,” says Villars. “Larger shops want virtualization for functions like data replication and volume management for provisioning.”
Differing Approaches
Traditionally, there have been three distinct camps in this field. On the appliance side there are IBM's SVC, StorAge, Network Appliance and DataCore. Within the array/fabric there are varying architectures offered by HDS, Sun, HP and Acopia. And in the switch camp there are EMC's Invista, McData, Brocade, QLogic and Cisco.
McData, Brocade, Cisco and others, however, have made acquisitions or formed partnerships aimed at fabric-based virtualization, so it appears that the lines among the categories are already beginning to blur. And some of the other vendors mentioned above are now straddling at least two camps, if not extending beyond these rigid bounds.
Switch and array advocates, however, are on the attack, questioning the performance and flexibility of appliances and early virtualization engines.
“Initial implementations of storage virtualization relied on discrete solutions based on off-the-shelf components or port-based processing engines that provided the functionality required,” says Amar Kapadia, director of product management at Aarohi Communications. “The appliance approach is considered easy to deploy but tends to be application-specific.”
Aarohi believes the next generation of storage virtualization is embodied in intelligent SAN components such as its AV150 Intelligent Storage Processor. The company has formed a partnership with switch vendor McData to create fabric-based virtualization services.
HDS makes a similar attack on appliance and switch solutions.
“The Universal Storage Platform places virtualization in the storage controller at the edge of the storage network instead of at the host or in a switch or appliance at the core,” says James Bahn, director of software at HDS. “This is the best place for performance and security reasons.”
Meanwhile, Network Appliance is of the opinion that storage virtualization is best done on the network via an appliance.
“This provides customers with the most flexibility in array choice, doesn’t lock them in the way an array-based solution like TagmaStore does, and does not require all the complexity and cost of host-based virtualization solutions with client code,” says Jeff Hornung, vice president of storage networking at NetApp. “The appliance can be in-band or out-of-band within the network.”
Who’s on First?
While no one has established firm market dominance, has anyone at least managed to bunt onto the equivalent of virtualization first base? IBM appears to have the most sales to date. Enterprise Strategy Group founder and senior analyst Steve Duplessie reports that IBM has sold more than 1,500 SVC systems.
“IBM is best placed to make the most of virtualization technologies, if it can crack how to make them provide a consistently defined and managed service across their product portfolio,” says Jon Collins, an analyst with UK-based Quocirca.
Cisco, too, may be gaining traction with its recent Topspin acquisition, which gives it the ability to link server, storage and networking virtualization.
“Topspin was one of those acquisitions that could change a company,” says Collins. “If Cisco chooses to fully embrace virtualization capabilities, they’d have a pretty compelling result.”
Cisco, though, remains largely on the outer rim of the storage galaxy.
“Its challenge is that all the intellectual property on replication, provisioning and other core storage functions lies in the hands of storage vendors,” says Villars. “Cisco needs to add value to gain more ground.”
One sleeper in the race is Microsoft. The company has quietly been establishing itself as a storage force over the past two years, and recently overcame some licensing hurdles that stood in the way of virtualization.
“Microsoft may be late to the party, but it is probably going to come out with some impressive technology,” says Collins. “Microsoft will make virtualization part of the server operating system.”
Virtual OS
Just as the boundaries are fading between storage virtualization categories, they may also be blurring between storage and server virtualization. In addition to Microsoft’s efforts via Windows Storage Server 2003, NetApp has added virtualization capabilities to the Data ONTAP OS in its V-Series (formerly gFiler) arrays.
“Virtualization software is becoming more robust and more tightly integrated,” says Villars. “It is evolving into more of an overall operating system.”
Collins agrees. He thinks the argument over where virtualization is best accomplished — in the switch, the array or the appliance — is a false one. It should be done in all of them, united by one overarching virtualization layer, he says, because virtualization is an enabler rather than a technology in its own right.
“Virtualization is about adding a management layer to enable a resource to be controlled more transparently,” he says. “In 10 years time, we’ll look back and say ‘wow, we re-invented the operating system’ — admittedly a hyper-distributed, enterprise-wide OS, but an OS nonetheless.”
Virtualization, then, may be morphing into one element of a distributed operating system spanning servers, networks and storage, with each of the three being virtualization-aware. But virtualization in only one of them could get you into trouble. On the server side, for example, some early server virtualization projects caused problems with storage addresses and other advanced storage management functions. For virtualization to work properly, server virtualization must leverage storage virtualization capabilities or it will run into a roadblock.
Similarly, network devices or storage switches can employ all kinds of clever packet inspection techniques to understand the nature of the data being transported and make decisions about how to deliver or store it efficiently. While the network can know that a given stream is a JPEG and that it may be useful to cache it, it cannot tell the difference between an X-ray and a pornographic photo. And virtual stores or virtual server pools can only go so far in interpreting what they are for — a server pool may choose to allocate extra processing to a certain application when other applications are running idle, for example, but it can't necessarily tell the difference between a payroll run and a denial of service attack against the server.
“It is important to consider virtualization within each of the three areas, but also to incorporate management tools that understand the need at the application layer and can make virtualization decisions accordingly,” says Collins.
Such dreams, though, are a long way off — perhaps three years out, according to Villars.