InfiniBand Faces a New Hurdle

Henry Newman

Sometimes I have written articles with the best of intentions, only to discover at some point in the future that I’d like to retract what I said. This is a hazard of what I do, which is to try to anticipate storage trends, so I’m no stranger to being wrong. I just try not to stay wrong. And my prediction that InfiniBand (IB) could become the storage interconnect of choice is one I’d like to revisit.

The more I look at our industry, the more I see products becoming successful because they work within the broader market. I remember about six years ago, I was asked by someone if Fibre-on was going to make it. For those of you who don’t remember, Fibre-on was the concept of placing Fibre Channel (FC) chipsets right on the motherboard for PCs and blades so no HBA was needed.

I thought this was a great idea, but it crashed and burned pretty fast for a number of reasons. First, Fibre Channel disks were more expensive than SATA, and those who purchased systems in that price range wanted cheap storage. Second, a Fibre-on system cost significantly more than the same system with 1 Gbit Ethernet. Even with Fibre-on, Fibre Channel was not a commodity product like 1 Gbit Ethernet.

InfiniBand is another promising technology that could run into market realities. Recent developments suggest to me that IB may not make it within the storage or networking hierarchy. This is not to say I want IB to fail; it is more my observation of what is happening within the market. Products with a market presence don’t necessarily fail outright; they tend to fade away, with smaller vendors taking over from large vendors. That is what happened with HiPPI, the High Performance Parallel Interface. Whether I am right or wrong, IB will be around for a number of years, since too many vendors have too many products depending on the technology, and in the short term the Ethernet community doesn’t have an immediate alternative to IB. But change is in the wind.

Storage and Networking

The concept of having different technologies for storage and for communications between computers goes back long before my time and has evolved into what we have today: at its most basic, Ethernet for computer-to-computer communications, and Fibre Channel for computer-to-storage communications. Some of you might remember that in the late 1990s and the early part of this decade, the Fibre Channel community tried and failed to get Fibre Channel adopted by the network community by adding TCP/IP support to Fibre Channel. This failed miserably because the cost per port and the cost of the HBAs were just too high compared with the then-emerging 1 Gbit Ethernet.

So we have two different technologies with two different sets of requirements. Fibre Channel is very good at getting the packet to where it’s supposed to go, but it lacks many of the concepts Ethernet has for ensuring that things get retransmitted in case a packet goes awry. Conceptually, storage networking and TCP/IP networking have different requirements and different goals and objectives. You can’t place a square peg in a round hole; I like to say that you can pound it in, but you get splinters when you do. With this division of networking, the cost of ports, NICs and other parts of the Ethernet data path has dropped dramatically. Compare the cost of these items to Fibre Channel or IB and you will see huge cost differences at every point in the data path. For sites with hundreds or thousands of ports, these costs add up.

So what does this all have to do with IB? IB is trying to combine storage and communication into one network. The concepts for IB came from the need of large clusters for low-latency, high-speed communication among applications running across the cluster. Given the need for low latency, the communication between the sending and receiving computers had to use DMA, moving data directly between memories without the overhead of the operating system’s network stack. All of this was accessed by the application initially with PVM (Parallel Virtual Machine) and now with MPI (Message Passing Interface).
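To make the MPI model concrete, here is a minimal point-to-point sketch. The mpi4py binding and these specific calls are my illustration, not something from the article; the point is that the application sees only MPI, while the library underneath decides whether the bytes move over IB, iWARP or plain Ethernet.

```python
# Minimal MPI point-to-point sketch using mpi4py (illustrative only).
# Run with, for example: mpirun -np 2 python ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD      # all processes started by mpirun
rank = comm.Get_rank()     # this process's ID within the communicator

if rank == 0:
    # Rank 0 sends a small payload to rank 1; on an RDMA-capable fabric
    # the MPI library can move the buffer without kernel involvement.
    comm.send({"step": 1, "payload": "hello"}, dest=1, tag=0)
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    print(f"rank 1 received: {msg}")
```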

So initially, IB was just about fast communications between computers. But a few years ago, given the size of IB networks, a number of U.S. government laboratories began asking their RAID controller vendors to build an IB interface for the RAID controller so they did not have to maintain an FC network for storage and an IB network for communication between machines. A number of RAID vendors happily complied to satisfy the needs of some of their biggest customers, and the IB community responded with SCSI over IB (the SCSI RDMA Protocol, or SRP).

Fast forward to today and IB has become the standard interconnect for large clusters, and it is making inroads in a number of smaller clusters. Over the next five years, IDC expects to see about 350 percent growth for HCAs (host channel adapters) and about 600 percent growth for switch ports.

FCoE Could Rock the Storage World

I see a problem with IDC’s analysis, and that is the new T11 standard for FC over Ethernet, called FCoE. For a good overview, see this IBM/T11 paper. Think about it: if you could run storage networking over your 10 Gbit Ethernet network at a cost per port far lower than either FC or IB, using your standard networking gear and potentially fewer personnel, what would you do?

As with anything, there are winners and losers. A few years ago, IB was dominated by niche players, but with Cisco buying Topspin and QLogic acquiring PathScale, there has been significant consolidation in the market. Storage networking, for the most part, has been a small part of the total networking market, so some of the big Ethernet switch vendors did not care about it. Cisco purchased an FC switch company, Andiamo, back in 2002 and got into the storage networking market, but it was one of the few traditional networking companies to do so. Fast forward to today: storage networking is growing at a significant rate and networking companies want a part of it. The easiest way to accomplish that, in my opinion, is not to start building FC or IB switches, but to change the standard, and I believe these companies are thinking the same way.

Less than a year ago, I thought that IB might be one of the interconnect winners in the future, but I now have serious doubts because of market forces, my own evolving understanding, and FCoE. This is not to say that IB will be here and gone in a year, two years or even five. IB has a much broader market presence than technologies like HiPPI did in the 1990s, but that is not to say that IB will be with us forever.

Let’s say for argument’s sake that iWARP, an RDMA (remote DMA) protocol that runs over Ethernet, has bandwidth similar to IB, but instead of IB’s communications latency of 4 microseconds, iWARP’s is 7 microseconds. Let’s also say that IB costs, say, 30 percent more. The market question then becomes what percentage of the market will be satisfied with iWARP and Ethernet, and what percentage will pay the additional cost for IB’s lower latency. We all know based on history that a good percentage of the market will head down the Ethernet and iWARP path, likely driving up the cost of IB further as that market shrinks or stays the same. History also tells us that the big players often leave such a market to the niche players, which historically have charged more for the same products.
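To put rough numbers on that tradeoff, here is a back-of-envelope sketch using the hypothetical figures above (4 vs. 7 microseconds, roughly 30 percent more per IB port). The 10 Gbit bandwidth figure and the message sizes are my own assumptions for illustration only.

```python
# Back-of-envelope comparison using the article's hypothetical numbers.
LATENCY = {"IB": 4e-6, "iWARP": 7e-6}     # seconds, hypothetical
BANDWIDTH = 1.25e9                        # bytes/s (~10 Gbit/s), assumed equal
PORT_COST = {"IB": 1.3, "iWARP": 1.0}     # relative cost per port, hypothetical

for msg_bytes in (1_024, 1_048_576):      # one small and one large message
    for net in ("IB", "iWARP"):
        t = LATENCY[net] + msg_bytes / BANDWIDTH
        print(f"{net:5s} {msg_bytes:>9d} B: {t * 1e6:8.1f} us "
              f"(relative port cost {PORT_COST[net]:.1f})")
```

For the 1 MiB message the 3 microsecond latency gap is noise, while for the 1 KiB message it adds roughly 60 percent to the transfer time, which is why latency-sensitive cluster codes may keep paying the IB premium while the rest of the market drifts toward Ethernet.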

IB serves a need that is not available via commodity products today. The question is whether the commodity products of tomorrow will have some of the features of IB, thus leaving the cost of IB in the dust. Time will tell, but the writing may already be on the wall.

Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 26 years of experience in high-performance computing and storage.
