The Remote Direct Memory Access Consortium (RDMA) — the group responsible
for creating architectural specifications for products that combat data
latency over TCP/IP — has released the inaugural version of its spec.
The Chicago-based RDMA, working as a complementary group to the Internet
Engineering Task Force (IETF), was launched in
May to accommodate demands for increased networking bandwidth and
speeds.
Specifically, RDMA helps eliminate cumbersome data copy operations and
reduces latencies by allowing one computer to directly place information in
another computer’s memory with few demands on memory bus bandwidth and CPU
processing overhead. Simply put, RDMA smooths the passage of network data
and improves communications in the process.
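To make that concrete, here is a minimal sketch of what a one-sided RDMA
write looks like to software, using the libibverbs API as one common
embodiment of the model (the API itself is not part of the consortium’s
spec, and the helper below is hypothetical). It assumes the connection is
already established and that the peer’s buffer address and access key
(rkey) were exchanged out of band; queue-pair setup and error handling are
omitted.

    #include <stdint.h>
    #include <infiniband/verbs.h>

    /* Post a one-sided RDMA write: the local adapter places len bytes
     * from buf directly into the peer's memory, with no copy through
     * the remote CPU or its sockets stack. Assumes (hypothetically)
     * that qp is an already-connected queue pair, that mr came from
     * ibv_reg_mr() over buf, and that the peer's buffer address and
     * rkey were exchanged out of band during connection setup. */
    static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                               void *buf, uint32_t len,
                               uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,   /* local source buffer */
            .length = len,
            .lkey   = mr->lkey,         /* key from memory registration */
        };
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_WRITE,  /* one-sided operation */
            .send_flags = IBV_SEND_SIGNALED,  /* request a completion */
        };
        wr.wr.rdma.remote_addr = remote_addr; /* where the data lands */
        wr.wr.rdma.rkey        = rkey;        /* peer's access key */

        struct ibv_send_wr *bad = NULL;
        return ibv_post_send(qp, &wr, &bad);  /* 0 on success */
    }

Note what is absent on the remote side: once the target buffer has been
registered and advertised, the data arrives without the receiving CPU
copying a byte — precisely the overhead the consortium is targeting.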
The RDMA over TCP protocol thus reduces the overhead burden on processors
and memory, freeing ultra-taxed processors for other duties, such as user
applications. The consortium said another opportunity is the ability to
converge functions in the data center onto fewer types of interconnects,
which makes the infrastructure less complex and easier to manage.
The RDMA was born out of concern for the future of network data interchange,
as networks move closer to 10 Gigabit Ethernet and data centers come under
strain: right now, network speeds are closing in on CPU power fast, which
will mean more pesky latency. The group also argues that current proprietary
RDMA products are unsatisfactory and that an interoperable standard is
needed.
John Gromala, Manager of Technology Strategy & Marketing for HP Industry Standard Servers and HP’s spokesman on behalf of the RDMA Consortium, called the RDMA technology an IP fabric that improves the utilization of the data center as a valuable tool to house data for businesses.
“Over time, this will open up new server designs that feature less interconnect,” Gromala told internetnews.com. “This will lower operating costs for the businesses. In the end, IP fabric ends up being the single unifying piece.”
Gromala, who expects first-generation RDMA products to see the light of day in 2004, said the RDMA technology improves end-to-end efficiency and scaling in IP networks, as well as communication between servers.
“From an IP consolidation perspective, we like to compare this technology to the reason why people are buying SUVs,” Gromala said. “RDMA has fewer interconnects, reduces complexity and lowers cost.”
Those who think RDMA doesn’t sound much different from the problems
addressed by InfiniBand or the VI Architecture would be correct. All three
architectures specify a form of RDMA and have strong similarities. While
the VI Architecture’s goal was to specify RDMA capabilities without
specifying the underlying transport, InfiniBand improved upon the RDMA
capabilities of VI and specified an underlying transport and physical layer
optimized for data-center-class traffic. RDMA over TCP/IP will
specify an RDMA layer that will interoperate over a standard TCP/IP
transport layer. RDMA over TCP does not specify a physical layer; it will
work over Ethernet, wide area networks (WAN) or any other network where
TCP/IP is used.
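To picture how that layering rides an ordinary TCP byte stream, here is a
rough sketch in C of the framing the consortium’s specs describe — an MPA
frame carrying a DDP “tagged” segment. The struct layouts below are
simplified for illustration only, not the authoritative wire formats.

    #include <stdint.h>

    /* Illustrative only: simplified view of RDMA-over-TCP framing
     * (RDMAP over DDP over MPA over TCP). See the consortium's
     * wire-protocol specs for the authoritative formats. */

    /* MPA restores message boundaries on top of TCP's byte stream. */
    struct mpa_frame {
        uint16_t ulpdu_length;     /* length of the DDP segment below */
        /* DDP segment follows, then padding and a CRC-32c trailer */
    };

    /* A DDP "tagged" segment steers its payload straight into a remote
     * buffer that was advertised in advance via a steering tag. */
    struct ddp_tagged_hdr {
        uint8_t  control;          /* tagged/last flags, version */
        uint8_t  rsvd_ulp;         /* octet reserved for RDMAP's use */
        uint32_t stag;             /* steering tag: names the buffer */
        uint64_t tagged_offset;    /* byte offset within that buffer */
    };

Because everything above the MPA layer is carried as ordinary TCP payload,
the scheme needs no changes to routers, Ethernet gear or the WAN — which is
the interoperability argument the consortium is making.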
RDMA, whose founding members include Adaptec, Broadcom, HP, IBM, Intel,
Microsoft and Network Appliance, said the version 1.0 wire protocol specs
are suitable for first-generation industry implementations of the RDMA over
TCP protocol and
have been forwarded to IETF working groups as Internet Drafts for their
consideration. RDMA is also working on companion specs, which are expected
to roll out in the first quarter of 2003.
For technical treatments of the consortium’s goals and motivations, please
see this white
paper.