SAN JOSE, Calif. — The InfiniBand Trade Association (IBTA) on Wednesday assembled its members for a demonstration of interoperability with an eye toward the enterprise.
The Portland, Ore.-based group of more than 190 companies is focused on bringing balance to data centers through a shift to a switched, fabric-based I/O architecture that it claims speeds communications between servers and networked devices to 10 gigabits per second. Most of the group's members expect to bring the software platform to market this year (more on that later).
Led by Intel, IBTA members ran tests using an open-source SourceForge InfiniBand Linux software stack on a combination of Intel Xeon and Itanium 2 processor-based server platforms at the Intel Developer Forum here. Ten InfiniBand vendors, including Einux, FCI, IBM, Intel, Mellanox, Molex, Topspin, and Tyco, supplied the various components.
“There is a growing market opportunity for our solution,” said Intel Enterprise Platforms Group program manager Allyson Klein. “Ethernet-based technology has a drop-off; InfiniBand does not. We started this program years ago and put it on a road show in nine states, and that will continue through the end of this year.”
IBTA says 10Gb InfiniBand is now in final testing, transport APIs are in place, and HPCC and database cluster solutions are expected soon; the group has also engaged extensively with server, storage, application, and infrastructure vendors. In addition, Klein said early-adopter deployments are going well and IT engagement is ramping up.
“The industry said we need an implementation we can use without rewriting the code,” Klein said. “Blades will also certainly be an opportunity for InfiniBand as we are learning and growing with the commercial market.”
The group announced its 1.0 specification in October 2000. The most recent version of the specification, 1.1, was released in November 2002.
While InfiniBand is currently being installed at government sites such as Los Alamos National Laboratory, the technology has also recently emerged as a leading alternative in embedded applications, extending the architecture's reach into new markets.
Two crucial focus areas for InfiniBand are database clustering and high-performance computing (HPC).
With regard to database clustering, the community presented a “carcrawler” demonstration from IBM featuring a DB2 cluster running on Intel Xeon and Itanium 2 processor-based servers, connected via InfiniCon InfiniBand systems and showcasing scalability potential to 1,000 nodes. Topspin showcased a DB2 cluster that used the InfiniBand architecture as a unified fabric for I/O, interprocess communication (IPC), and storage traffic. At the same time, Einux demonstrated the scalability of dense Intel Xeon processor-based servers within an InfiniBand fabric, showing 16 half-U Einux servers and a capacity of up to 80 servers per rack.
The InfiniBand community also featured two demonstrations of InfiniBand fabrics at work in HPC environments. Appro and InfiniCon teamed to show the scalability of InfiniBand fabric connectivity using InfiniCon’s InfinIO 7000 Shared Clustering System. KSL’s demonstration featured the InfiniBand architecture’s remote direct memory access (RDMA) capability for HPC environments, sketched in code below.
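RDMA is what lets one server deposit data directly into another server’s memory without involving the remote CPU, which is much of its appeal for clusters. As a rough illustration, here is a fragmentary sketch of posting a one-sided RDMA write using the open-source libibverbs API, a verbs-style interface in the same family as the demo’s SourceForge stack, though not necessarily the exact API the vendors used. Queue-pair setup is omitted (see the setup sketch in the “How It Works” section below), and remote_addr and remote_rkey are hypothetical values a real application would exchange out of band.

/* Sketch: post a one-sided RDMA write via libibverbs.
 * Assumes the QP is already connected and the buffer registered. */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                    void *local_buf, uint32_t len,
                    uint64_t remote_addr, uint32_t remote_rkey)
{
    /* Scatter/gather entry describing the local source buffer. */
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided operation */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion */
    wr.wr.rdma.remote_addr = remote_addr;        /* hypothetical, exchanged out of band */
    wr.wr.rdma.rkey        = remote_rkey;        /* hypothetical, exchanged out of band */

    /* The HCA moves the data directly into remote memory;
     * the remote CPU is never interrupted. */
    return ibv_post_send(qp, &wr, &bad_wr);
}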
How It Works
InfiniBand has been gradually replacing the PCI bus in high-end servers and PCs. Instead of sending data in parallel, as PCI does, InfiniBand sends data serially and can carry multiple channels of data at the same time in a multiplexed signal. The principles of InfiniBand mirror those of mainframe computer systems, which are inherently channel-based. InfiniBand channels are created by attaching host channel adapters (HCAs) and target channel adapters (TCAs) through InfiniBand switches. HCAs are I/O engines located within a server; TCAs enable remote storage and network connectivity into the InfiniBand interconnect infrastructure, called a fabric. The InfiniBand architecture is capable of supporting tens of thousands of nodes in a single subnet.
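For the curious, here is a minimal sketch of what “attaching” to an HCA looks like in software, again assuming the open-source libibverbs API rather than any particular vendor’s stack. The application opens the adapter, allocates the supporting objects, and creates a queue pair, which is one end of an InfiniBand channel; error handling is trimmed for brevity.

/* Minimal sketch: open an HCA and create the objects a channel needs.
 * Build with: gcc hca_sketch.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no InfiniBand HCAs found\n");
        return 1;
    }

    /* Open the first HCA on the fabric. */
    struct ibv_context *ctx = ibv_open_device(devices[0]);
    printf("opened HCA: %s\n", ibv_get_device_name(devices[0]));

    /* A protection domain scopes which memory a channel may touch. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Completion queue: the HCA reports finished work here. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    /* A queue pair (send queue + receive queue) is one end of a
     * channel; the other end is a QP on a peer HCA or TCA. */
    struct ibv_qp_init_attr attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .cap     = { .max_send_wr = 16, .max_recv_wr = 16,
                     .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,   /* reliable connection */
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);

    /* Teardown in reverse order. */
    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}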
InfiniBand supports both copper wire, up to 17 meters, and fiber-optic cabling, up to 10 kilometers. Raw transmission rates begin at 2.5Gbps for a 1X link; wider 4X and 12X links aggregate lanes to 10Gbps and 30Gbps.
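Note the distinction between raw signaling rate and delivered bandwidth: InfiniBand links use 8b/10b encoding, so ten bits on the wire carry eight bits of data. A quick back-of-the-envelope calculation, in C to match the sketches above:

/* Raw signaling rate vs. usable data rate for InfiniBand link widths. */
#include <stdio.h>

int main(void)
{
    const double lane_gbps = 2.5;       /* 1X raw signaling rate */
    const int widths[] = { 1, 4, 12 };  /* 1X, 4X, 12X links */

    for (int i = 0; i < 3; i++) {
        double raw  = lane_gbps * widths[i];
        double data = raw * 8.0 / 10.0; /* strip 8b/10b overhead */
        printf("%2dX link: %5.1f Gbps raw, %4.1f Gbps data (%.2f GB/s)\n",
               widths[i], raw, data, data / 8.0);
    }
    return 0;
}

So a 4X link signals at 10Gbps but delivers 8Gbps of data, or 1GB/s per direction.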
Dissension in the Ranks
The IBTA was founded in August 1999 by seven companies: Compaq, Dell Computer, Hewlett-Packard, IBM, Intel, Microsoft, and Sun Microsystems. The current Steering Committee includes Dell, HP (which acquired Compaq), IBM, Intel, and Sun; InfiniSwitch, Lane 15 Software, Mellanox Technologies, and Network Appliance round out the decision-makers.
There have been reports of infighting among some of the founding companies. HP, for example, told internetnews.com that it is lukewarm on the IBTA’s direction and is instead “watching to see which direction InfiniBand will be going in the next six months.” Microsoft has decided to discontinue developing native InfiniBand support, saying in a recent statement that it would “continue to enable third parties to deploy Windows IB solutions.”
Klein says Intel remains confident in the IBTA’s direction, noting that this is the fourth Intel Developer Forum to host an official InfiniBand community gathering.
Intel Senior Vice President and General Manager of the Enterprise Platforms Group Mike Fister is expected to acknowledge Intel’s involvement in the InfiniBand strategy during his keynote Thursday.