The Top500 supercomputer list is computing’s game of leapfrog, where hardware vendors one-up each other on raw performance. IBM, which has long dominated the list, is about to put itself far, far ahead of anything that list has ever seen or promised.
IBM (NYSE: IBM) and the Department of Energy’s National Nuclear Security Administration will build a pair of supercomputers at the Lawrence Livermore National Laboratory, the same facility where its BlueGene/L is housed. BlueGene/L dominated the list for years until last summer, when another IBM supercomputer, Roadrunner, bumped it from the top spot.
Sequoia, due to be delivered in 2011 and operational in 2012, will be a 20 petaflop system, a huge leap forward in supercomputing. The current top performer, Roadrunner, an IBM machine at the Los Alamos National Laboratory, is just barely over the one petaflop mark.
Up to now, the most ambitious future design has been Pleiades, a supercomputer designed by Intel and SGI and deployed at NASA’s Moffett Field facilities. It will be a one petaflop machine when completed this year, and NASA hopes to reach 10 petaflops by 2012.
Sequoia will be based on future IBM BlueGene technology and will use 1.6 million IBM POWER processors and 1.6 petabytes of memory, housed in 96 refrigerator-sized racks. Dave Turek, IBM’s vice president of deep computing, said the final specs on the processor have not been settled, so he could not say whether it will use POWER6 or some derivative. BlueGene/L, for example, uses the much older PowerPC 440 processor but achieved its performance through sheer scale.
No mixing of chips
He did say that there would be no mixing of chips as there was in Roadrunner, which used a combination of AMD (NYSE: AMD) Quad-Core Opteron and IBM Cell processors. All of the towers in the Sequoia system will be fronted by another computer that handles administrative functions, such as file system management, leaving Sequoia to do one thing: crunch numbers.
Penguinistas will be happy to know Sequoia will run the Linux operating system, heavily modified for massively parallel and scalable computing. If need be, the computer will be able to bring all 1.6 million processors to bear on a single task, or the system can be partitioned to run multiple jobs at once.
The quantum leap in performance doesn’t just come from the chips, but also from a system-on-a-chip design, more cores, more memory and more interconnects to improve chip-to-chip communication, said Turek.
“There are some advances made in networking that help facilitate a much higher profile of scalability than we’ve seen in the past,” he told InternetNews.com. “Most people don’t pay attention to networking, but things in the future will be more oriented toward memory, network and software.”
Turek admits a computer like this won’t sell a whole lot of units, but the single-rack versions of Sequoia do have potential. “The system is physically decomposable. When I look at the 200 teraflop single rack version, I think there will be a lot of people looking to buy that,” he said.
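The 200 teraflop per-rack figure squares with the full-system numbers quoted above. A quick back-of-the-envelope check, using the article’s stated figures of 20 petaflops, 96 racks and 1.6 million processors:

```python
# Sanity check of Sequoia's published figures (values as reported:
# 20 petaflops total, 96 racks, 1.6 million processors).
TOTAL_PFLOPS = 20
RACKS = 96
PROCESSORS = 1_600_000

per_rack_tflops = TOTAL_PFLOPS * 1000 / RACKS      # ~208 teraflops per rack
per_proc_gflops = TOTAL_PFLOPS * 1e6 / PROCESSORS  # 12.5 gigaflops per processor

print(f"{per_rack_tflops:.0f} TF/rack, {per_proc_gflops:.1f} GF/processor")
```

So a single rack works out to roughly 208 teraflops, in line with the 200 teraflop figure Turek cites, at about 12.5 gigaflops per processor.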
Sequoia will be employed in atomic weapons stockpile stewardship, running simulations of how the weapons degrade. The old method was to take the nukes into the desert and blow them up to see if they fizzled. It will also do a lot of research in basic science, such as materials science and fluid dynamics, all of which has applicability outside of weapons programs.