Mellanox Delivers 10 Gb/sec InfiniBand Server Blade Reference Design

Mellanox Technologies today announced the availability of its Nitro II 4X (10 Gb/sec) InfiniBand server blade reference platform. The Nitro II platform utilizes Mellanox’s second-generation 10 Gb/sec InfiniHost host channel adapter (HCA) and InfiniScale switching silicon. The platform consists of 2.2 GHz Intel® Pentium 4 processor-based diskless server blades, dual 16-port 4X switches, and a 10 Gb/sec backplane supporting 480 Gb/sec of switching capacity in a compact 14-blade chassis. The combination of high-performance processors, large memory capacity, and second-generation InfiniBand silicon gives OEMs and developers a 10 Gb/sec development platform for optimizing the performance of clustered databases and other data center applications.

“InfiniBand diskless server blades create a whole new class of data center solutions that provide two key server improvements. First, Nitro II blades deliver more than three times the CPU performance of current blade technologies by using a 2.2 GHz Intel processor versus the 700 or 800 MHz parts offered today,” said Yuval Leader, vice president of system solutions, Mellanox Technologies. “Second, the InfiniBand architecture enables CPU, I/O, and storage sharing across all data center systems, rather than duplicating and isolating these essential resources in each and every server or server blade. This gives data center managers the ability to scale, provision, or redeploy CPU, I/O, or storage resources individually, on an as-needed basis.”

Nitro II, like Mellanox’s first-generation InfiniBand server blade reference design released in January 2002, provides groundbreaking levels of integration, ease of use, performance, I/O sharing, and management capability, delivering lower total cost of ownership (TCO) for enterprise and Internet data centers.

“Mellanox is again providing leadership by utilizing industry-standard server components that demonstrate the winning combination of InfiniBand and high-performance server blades,” said John Humphreys, senior research analyst, Global Enterprise Server Solutions, IDC. “IDC sees a tremendous market for server blades and projects that by 2006 more than 1.5 million servers will ship in a blade format. IDC believes that the InfiniBand architecture has a distinct opportunity to play a key role in the development of server blades.”

“InfiniBand connectivity will emerge in Intel Architecture platforms early next year, with blade servers as an important initial implementation. Reference designs like Mellanox’s Nitro II platform will help accelerate delivery of InfiniBand architecture-based blades,” said Jim Pappas, director of initiative marketing, Intel Enterprise Platform Group. “We look forward to working closely with Mellanox in delivering InfiniBand capability to our server platforms.”

Nitro II InfiniBand Architecture
The Nitro II server blades are based on Mellanox’s second-generation InfiniHost HCA, a 2.2 GHz Intel Pentium 4 processor, and the ServerWorks Grand Champion chipset. The server blades support up to 4 GB of memory and are both diskless and headless (no video monitor required). InfiniHost’s low-latency hardware transport overcomes the latency and bandwidth penalties of LAN-based remote storage, eliminating the need for local storage on the server blade. Remote booting allows InfiniBand server blades to load operating system, application, and other software images from either NAS or SAN storage. In addition, the absence of a local disk improves reliability, lowers cost, and frees power budget for improved CPU and memory performance.
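To make the hardware-transport claim concrete, the following is a minimal sketch of how a host registers memory with an InfiniBand HCA so the adapter can move data directly, without CPU copies. It uses today’s open libibverbs API purely for illustration; it is not the vendor programming interface of the 2002-era InfiniHost, and the setup shown is only the first step of a full RDMA transfer.

    /*
     * Illustrative sketch (modern libibverbs, not the era's vendor API):
     * open an InfiniBand HCA and register a buffer for RDMA.
     * Build: gcc rdma_sketch.c -o rdma_sketch -libverbs
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no InfiniBand HCA found\n");
            return 1;
        }

        /* Open the first HCA, e.g. the blade's InfiniHost adapter. */
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a buffer so the HCA can DMA to and from it directly.
         * A remote peer holding the returned rkey may read or write it,
         * which is what lets remote storage behave like a local disk. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            fprintf(stderr, "memory registration failed\n");
            return 1;
        }
        printf("registered %zu bytes, rkey=0x%x\n", len, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

A complete transfer would additionally create queue pairs and post RDMA work requests; the registration step above is the piece that enables the zero-copy, kernel-bypass I/O the hardware transport provides.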

Dual 16-port 10 Gb/sec switch blades offer a combined throughput of 640 Gb/sec. Each switch aggregates twelve 4X ports from the backplane to four 4X uplink ports on the front of the chassis. The four 10 Gb/sec uplink ports can be used to connect multiple chassis together to create large clusters of server, I/O, or storage blades.

The passive backplane uses a dual-star configuration to link 12 server or I/O slots through redundant InfiniBand fabric switches. The 24 4X InfiniBand backplane connections each carry 20 Gb/sec, for a total aggregate bandwidth of 480 Gb/sec. The compact 4U chassis enables up to 96 server blades in a single rack. The InfiniBand fabric also provides dedicated lanes for chassis and baseboard management and carries keyboard, mouse, power, and management traffic, greatly reducing the number of cables required for server clusters.
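The quoted capacity figures follow from simple per-link arithmetic. Below is a brief sanity check, assuming each 4X InfiniBand link carries 10 Gb/sec in each direction (20 Gb/sec full duplex); the constants are taken from the chassis description above.

    /* Sanity check of the Nitro II bandwidth figures, assuming each 4X
     * InfiniBand link runs at 10 Gb/sec per direction (20 Gb/sec full
     * duplex). */
    #include <stdio.h>

    int main(void)
    {
        const int gbps_per_4x_link = 20;           /* full duplex             */
        const int switches = 2;                    /* redundant fabric        */
        const int ports_per_switch = 16;           /* 12 backplane + 4 uplink */
        const int backplane_links_per_switch = 12; /* dual star to 12 slots   */

        /* Dual 16-port switches: 2 x 16 x 20 = 640 Gb/sec. */
        printf("switch throughput:   %d Gb/sec\n",
               switches * ports_per_switch * gbps_per_4x_link);

        /* Backplane: 2 x 12 x 20 = 480 Gb/sec. */
        printf("backplane bandwidth: %d Gb/sec\n",
               switches * backplane_links_per_switch * gbps_per_4x_link);

        /* The 96 blades-per-rack claim is consistent with eight 4U
         * chassis of 12 server slots each (8 x 12 = 96). */
        return 0;
    }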
