Linux Ready For Real Time on Wall Street?

NEW YORK — With the trillions of dollars in financial transactions that
take place on Wall Street each year, there is little to no tolerance for lost
trades. So how do you minimize the risk? The answer, according to Red Hat, could well be a real-time Linux solution.

Tim Burke, director of emerging technologies at Red Hat, took
the stage at the Linux on Wall St. conference and provided the suit-and-tie
audience with a real business case for Real Time Linux, the next evolution of Linux.

“All too often people have too narrow a view of what real time is,” Burke
said. “The term ‘real time’ means many different things to different people: Some think it’s just for medical or military applications, but we’re
talking about [the] financial services sector and high-performance trading.”

Burke noted that with the Real Time edition of Red Hat Enterprise
Linux 5 (RHEL 5), currently under development, Red Hat will be able to deliver a
deterministic operating system that will be of great benefit to the
financial services industry.

Running a financial industry messaging benchmark, Burke showed a performance
slide demonstrating a dramatic improvement when running Real Time RHEL 5 rather than
standard RHEL 5. Financial-services transactions must complete within a well-known, finite amount of time under SEC regulations.

With standard RHEL 5, Burke showed a slide in which there were a few latency
spikes of greater than 200 milliseconds (ms). That kind of latency could well translate
into a missed trade.

“With Real Time RHEL the number of transactions that had more than 5 ms of
latency is only two out of 1 million,” Burke said. “It’s 13,000 out of a
million on regular RHEL 5. An almost trade is not acceptable, and that’s what this is all about.”

What real time provides is predictability in Linux response time. Real time
delivers more fine-grained determinism than the traditional stock Linux
kernel has been able to provide.

Burke said that Red Hat has been at the forefront of getting
real time into Linux, but it’s an effort that is not without challenges.

“The big challenge is to remove black holes in the Linux kernel,” Burke
said. “Places where the kernel runs for a long period of time in an
uninterruptible manner.”

There is a long list of changes that are expected to land by the
time the Linux 2.6.23 kernel is finalized. Burke said that the real-time changes to the kernel are colossal, encompassing some 1.2 million lines of code.

In addition to the real-time Linux enhancements, RHEL 5 Real Time will
also include a new latency tracer application. The latency tracer will
capture a runtime trace of the longest-latency code paths in both the Linux
kernel and the application.

Moving to real time won’t be painful for enterprises, either. According to
Burke, the real-time enhancements are all under the hood, and no application
changes are required.

“Regular applications will run unmodified, which helps to ease the transition, as
well as making it easier for benchmarking,” Burke said.

Despite all the goodness that real time promises, Burke cautioned that it is
not a silver bullet for all that ails slow Wall St. apps.

“If there are inefficiencies at the app layer, there is nothing the kernel
can do about poorly coded programs,” Burke said. Real time could also potentially slow down certain types of long-running applications.

Burke said that when enterprises think of
throughput, what they really want is to put through the greatest number of
transactions over a given period of time. The most efficient way to do that is to
let each task run uninterrupted for as long as it takes to complete.

Real time, in contrast, involves breaking longer tasks into shorter tasks in
order to prioritize them. “When you break up longer tasks and shorten them, there will be some inherent overhead,” Burke said. “So when it comes to throughput — things like running a large database — it’ll actually be slower.”

The trade-off is gaining determinism at the cost of some throughput.

“Throughput is the average highway you face today,” Burke explained. “If
what you want is the largest number of cars getting through over time, that’s
OK. But if it’s important that certain cars can get through quicker, in a high-speed lane, that’s what the low-latency part of the kernel is all about.”
