Video Cards Become Scientific Calculators

ATI Technologies and researchers at Stanford University have found a way to use the high-powered graphics processing unit (GPU) on video cards to perform the number crunching needed for a scientific distributed computing project.

The announcement is a proof of concept for ATI, which is promoting its high-powered video cards as an alternative to CPUs for performing massive floating-point calculations.

Distributed computing projects have been taking place on the Internet since the late 1990s, as the advent of millions of networked computers made such efforts possible.

The concept is simple enough: take a large computing task, slice it up among thousands of volunteers who leave their computers running, and whenever a PC sits idle, its slice of the task crunches away in the background.
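The mechanics are easy to mimic in a few lines of code. The sketch below is purely illustrative; the toy task, the work-unit size, and the local process pool standing in for volunteer PCs are all assumptions, not Folding@Home's actual client or protocol. It slices one large floating-point job into independent work units, crunches each separately, and merges the partial results at the end.

```python
# A minimal, self-contained sketch of the distributed-computing idea:
# slice one big numeric task into small work units, let many "volunteers"
# crunch them independently, then merge the partial results. Everything
# here is illustrative only, not the actual Folding@Home software.

from concurrent.futures import ProcessPoolExecutor

def crunch_work_unit(bounds):
    """One volunteer's share: a floating-point sum over a small slice."""
    start, stop = bounds
    return sum(1.0 / (i * i) for i in range(start, stop))

def main():
    total_range = 10_000_000          # the "large computing task"
    unit_size = 500_000               # how much each work unit covers
    work_units = [(i, min(i + unit_size, total_range + 1))
                  for i in range(1, total_range + 1, unit_size)]

    # Stand-in for thousands of volunteer PCs: a pool of local workers.
    with ProcessPoolExecutor() as volunteers:
        partial_results = volunteers.map(crunch_work_unit, work_units)

    print("merged result:", sum(partial_results))   # converges toward pi^2 / 6

if __name__ == "__main__":
    main()
```

In a real project the work units travel over the Internet and the merging happens on the project's servers, but the division of labor is the same.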

However, distributed computing projects have always used the CPU, not the GPU. In this case, Vijay Pande, the Stanford professor who created the Folding@Home project, said the GPU is actually better suited to the job, because Folding@Home's workload consists almost entirely of floating-point calculations.

Folding@Home simulates how proteins self-assemble, and tries to determine why there are errors in the self-assembly process. When a protein misfolds, it can lead to diseases like cancer and Alzheimer’s.

The amount of computing time needed to simulate this is “off the scale,” as Pande put it.

It can take a fast processor a full day to simulate a billionth of a second of the folding process. A protein assembles in about a second, so at 1 nanosecond of simulated time per day, it would take a billion days to simulate a single assembly.
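Spelling out that back-of-the-envelope arithmetic (the conversion to years is an added illustration, not a figure from Pande):

\[
\frac{1\ \text{second of folding}}{10^{-9}\ \text{seconds simulated per day}} = 10^{9}\ \text{days} \approx 2.7\ \text{million years on a single processor.}
\]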

And that’s for small proteins. Larger, more complex proteins with 30 to 50 amino acids require a great deal more computing time; in some cases a calculation would have to run three to five years, Pande said.

That’s where the GPU comes in. The ATI X1900 GPU has 48 programmable floating-point processors and 128-bit memory with bandwidth far beyond what any CPU can deliver.
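The appeal is that this workload is data-parallel: the same floating-point arithmetic applied independently across large arrays of atom coordinates, which is exactly the pattern dozens of parallel floating-point processors are built for. The toy kernel below is only a sketch of that pattern in vectorized Python/NumPy, not ATI's or Stanford's actual GPU code; the Lennard-Jones-style pair potential and the random atom positions are stand-ins chosen for illustration.

```python
# Toy illustration of the data-parallel, floating-point-heavy work that maps
# well onto a GPU's many parallel processors. This is NOT the Folding@Home
# GPU core; it just evaluates a simple 12-6 pair potential for every pair of
# atoms in one vectorized sweep instead of a scalar loop.

import numpy as np

def pairwise_energy(positions, epsilon=1.0, sigma=1.0):
    """Sum a Lennard-Jones-style pair potential over all unique atom pairs."""
    deltas = positions[:, None, :] - positions[None, :, :]   # all pair vectors
    dist2 = (deltas ** 2).sum(axis=-1)                        # squared distances
    i, j = np.triu_indices(len(positions), k=1)               # unique pairs only
    r2 = dist2[i, j]
    s6 = (sigma ** 2 / r2) ** 3                               # (sigma / r) ** 6
    return float(np.sum(4.0 * epsilon * (s6 ** 2 - s6)))

rng = np.random.default_rng(0)
atoms = rng.random((500, 3), dtype=np.float32) * 10.0         # 500 random "atoms"
print("total pair energy:", pairwise_energy(atoms))
```

On GPU hardware, each pair would be handed to one of the parallel floating-point processors; the vectorized array operations here play the same role on a CPU.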

The floating-point processors on ATI’s X1900 line deliver a 20- to 40-fold performance boost from the hardware alone, with a further 10- to 15-fold boost on top of that from new algorithms.

Pande estimates he can get up to 100 gigaflops out of an X1900 card.

“It’s hard for me to emphasize how significant that is for us,” he said.

“If you ask how much would a computer cost that’s 40 times faster, you can’t even buy that. You can build a cluster with maybe 200 computers — that might work. But there’s a lot of cases where even that would be insufficient.”

Initial tests on the X1900 bore this out: Stanford was able to simulate three years’ worth of work in one month.

“You can get more compute power for certain apps with GPUs than you can with CPUs. A few very bright scientists have realized this,” said Jon Peddie, an analyst who follows the graphics market.

“GPUs have more floating-point power, so they will be an attractive processor solution for certain apps to the scientific community.”

Peddie said there has been interest in using GPUs for floating-point processing since they first hit the market, but the processors were not easy to design for.

In recent years, programming languages for these processors have improved, and DirectX 10, due from Microsoft along with Windows Vista, will make programming a GPU as easy as programming in C.

One caveat, however, is that only the X1900 and higher are truly capable of supporting Folding@Home’s computing, and those cards alone run $400 to $500, rather pricey by computer standards, Pande said.

Not just that, but they are “total power suckers,” as Peddie put it, requiring power supplies of 750 watts and higher. “Now how does that line up with AMD’s green power marketing campaign?” Peddie joked.
