
Trade Group: InfiniBand Alive and Well

NEW YORK -- Members of the InfiniBand Trade Association (IBTA) met here Tuesday to clear up misconceptions about the technology the group represents. The message? InfiniBand is alive and well, according to group co-chair Dr. Tom Bradicich.

Since the first version of InfiniBand -- a high-speed interconnect that promises to pipe data at 10 gigabits per second (Gbps) -- appeared in 2000, the technology has been praised for its potential to slash latency, among other things. The problem, however, was that original equipment manufacturers (OEMs) were slow to embrace the technology, which could be used to replace such standards as PCI and Ethernet.

At a time when IT managers are under pressure to reduce total cost of ownership and increase return on investment, InfiniBand would appear to be a saving grace for the industry. Its potential, however, has not materialized. In the last year or so, Microsoft and HP have backed off promises to support the standard, claiming that customers would not be willing to rip and replace their current architectures in favor of new ones. That caused industry watchers to wonder about its viability. The perception was: if the major vendors would not back it, why should smaller OEMs?

Bradicich, who doubles as chief technology officer of IBM's xSeries servers, expounded on the virtues of InfiniBand at the Museum of the City of New York to reinforce the group's devotion to the technology. He was joined by representatives from IBTA member companies Intel, Sun Microsystems, Mellanox, Topspin, JNI, Infinicon, Sanmina-SCI, SBS Technologies, Voltaire, Agilent Technologies and Fabric Networks.

Bradicich explained that InfiniBand is a more efficient way to connect storage, communications networks and server clusters. It has what he described as the "Fab 4" of interconnect characteristics: it is an open standard; it runs at 10Gbps; it serves as an offload engine; and it features Remote Direct Memory Access (RDMA), a network interface card (NIC) feature that lets one computer place data directly into the memory of another computer. InfiniBand's high-bandwidth, low-latency fabric can deliver a major improvement in application performance.
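To make the RDMA idea concrete, here is a minimal sketch of the local side of an RDMA setup, written against the OpenFabrics libibverbs API (an assumption for illustration; the article does not name any particular API or vendor stack). It registers a buffer with the host channel adapter and prints the address and remote key a peer would need in order to write into that memory directly, bypassing this host's CPU; connection setup and the actual transfer are omitted.

    /* rdma_register.c -- sketch of the memory-registration side of RDMA.
       Build (assuming libibverbs is installed): cc rdma_register.c -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        /* Open the first HCA and allocate a protection domain. */
        struct ibv_context *ctx = ibv_open_device(devices[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a buffer with the HCA. Registration pins the memory
           and returns keys the hardware checks on every access. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* A peer that learns this address and rkey (usually exchanged
           out of band, e.g. over TCP) can RDMA-write straight into buf
           without involving this host's CPU or operating system. */
        printf("buffer addr=%p rkey=0x%x\n", buf, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_free_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        free(buf);
        return 0;
    }

The out-of-band exchange of the address and rkey is the key design point: once the peer's adapter has them, transfers are handled entirely by the two HCAs, which is what makes the offload and low-latency claims above possible.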

"Infiniband really builds on the mistakes of other technologies that are un-interoperable," Bradicich said. "The pressure is on to do more with less as the slow economy pressures IT managers' budgets."

Bradicich said that despite the slow economy, IT managers are being pushed to build out infrastructure in their data centers. Whereas a company might have handled 10,000 transactions per minute a few years ago, it now must handle 20,000, he said. Bradicich also said that because it is becoming more and more common for a group of entry-level servers to complete the tasks of their high-end brethren, high-performance interconnects such as InfiniBand are becoming more desirable. The rise of cluster computing is driving InfiniBand, he said.

"We see a key trend where the cost of increase from midrange to the high-end is often greater than the actual performance from the increase," Bradicich said. "Infiniband allows you to get a lot of power out of a single processor, scaling out instead of up so you pay less." Normally associated as mainframe characteristics, such scalability is considered highly desirable, he said.

Besides speed and scalability, InfiniBand also cuts down on some of the "monstrous cabling" the data center is known for. Calling the setup a "three-piece suit," Bradicich said implementations of the technology typically require just three components: the server, an InfiniBand switch, and an InfiniBand host channel adapter (HCA).

Bradicich later showed a video in which a systems architect at the University of Washington said he was using InfiniBand to speed up streaming media services in the school's library. Bradicich also called upon an architectural engineer from Prudential Insurance to discuss how he was using the technology to build a convenient data warehouse.

The engineer, Don Canning, praised what InfiniBand has done for IT consolidation at Prudential. After discussing how the technology has helped him consolidate infrastructure at his company, Canning made a request: he asked the IBTA members to improve InfiniBand's failover support, so that if a server node goes down, the fabric preserves the node's I/O identity and his team does not have to reboot, which causes the business to lose users.

Bradicich said the industry can expect to hear of additional InfiniBand progress in late 2003 or early 2004.

Some experts think the technology is promising.

According to a research note from analyst group the 451: "All the indicators are that InfiniBand is not going to thrive as a general-purpose networking transport, as a peer to the established Ethernet or Fibre Channel. But it is likely to be used for server-to-server interconnects, linking multinode systems such as blade servers, symmetrical multiprocessors or high-node clusters. This sort of technology will be driven by the major server vendors, and so the startups with technology to contribute need to align themselves with the big systems vendors."

Still, questions about InfiniBand's future are valid, as companies have been slow to embrace the young technology. But in the past few months, vendors have been unveiling their InfiniBand plans and providing specific details of their results. The tide may be turning.

The fact is, companies that believe in the technology are working hard to bring products, and even programs, to bear. For example, Fabric Networks, formerly known as InfiniSwitch, on Tuesday launched an "IBM DB2 Technology Leaders Program" for switched fabric networking at the event, marking the first time 10Gb/s InfiniBand technology has been run natively on Windows-based servers. Also, Sanmina-SCI on Tuesday introduced an InfiniBand platform to support OEMs and systems integrators, including 10Gb/s (4X) InfiniBand switches, 10Gb/s (4X) Host Channel Adapters, software and cables.

To be sure, InfiniBand has already sparked some high-profile pairings. During a partnership extension event in April, Dell and Oracle said their engineering teams are testing the standard. Oracle said its labs are running InfiniBand performance tests on Dell systems that show more than a doubling of interconnect performance, which translates to higher reliability and better response times. The tests are progressing to the point that Oracle has pledged to include full InfiniBand support in the next major release of Oracle9i Database.

Earlier, in March, Sun teamed with Topspin to announce that the two are creating InfiniBand-based blade servers. Specifically, Sun will take Topspin's Fibre Channel and Gigabit Ethernet I/O modules and embed them within its next-generation servers. The aim is to boost application scalability, performance and resource utilization. Topspin will also develop InfiniBand support for Sun's Solaris operating system.

Meanwhile, IBM is preparing to launch a set of InfiniBand-supporting products for its Intel-based eServer xSeries line, including a host channel adapter that plugs into existing PCI slots, an InfiniBand switch and a software stack. Bradicich believes initial interest will come from users putting together DB2 and Oracle database clusters.