Two government-sponsored supercomputers are getting a boost from the best-known names in Silicon Valley.
NASA said Wednesday it has installed the Hewlett-Packard AlphaServer SC45 supercomputer for climate research into environmental issues such as global warming.
Information technology services firm Computer Sciences Corp. (CSC) installed the HP system at NASA’s Center for Computational Sciences at the Goddard Space Flight Center in Greenbelt, Md., in the first stage of a two-year NASA contract. The AlphaServer system, running the Tru64 UNIX operating system, includes more than 500 1-GHz Alpha processors.
Later this fall, the AlphaServer supercomputer will be expanded to more than 1,300 processors with the addition of more than 800 1.25-GHz Alpha processors. When the project is complete, the AlphaServer SC45 system will deliver as much as 3.2 TeraOPS (trillions of operations per second). The SC45 supercomputer will use 8 terabytes of HP StorageWorks Fibre Channel-based storage. Palo Alto, Calif.-based HP said it would maintain a services and support crew for the system.
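The 3.2-TeraOPS figure is roughly consistent with a back-of-envelope estimate. Assuming each Alpha processor retires two floating-point operations per clock (an assumption about the Alpha core, not a figure from HP), and using the article's minimum processor counts, the expanded configuration lands in the right range:

```python
# Rough peak estimate for the expanded AlphaServer SC45.
# The flops-per-cycle value is an assumption (2 FP ops per clock),
# not a figure stated in the article or by HP.
FLOPS_PER_CYCLE = 2

initial = 500 * 1.0 * FLOPS_PER_CYCLE    # 500 processors at 1.0 GHz  -> GFLOPS
upgrade = 800 * 1.25 * FLOPS_PER_CYCLE   # 800 processors at 1.25 GHz -> GFLOPS

peak_tflops = (initial + upgrade) / 1000.0
print(f"Estimated peak: ~{peak_tflops:.1f} teraFLOPS")  # ~3.0 teraFLOPS
```

Since the article says "more than" 500 and "more than" 800 processors, the exact counts are higher than used here, which would close the small gap to the quoted 3.2 TeraOPS.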
A second HP AlphaServer SC45 system, this one containing 32 processors, has been installed by CSC at Columbia University in New York and is being used for complementary environmental research at the Goddard Institute for Space Studies.
The environmental research project contract, which is valued at $17.5 million, represents the initial milestone in a six-year NASA plan to provide integrated high-end computing resources that will support NASA’s Earth and space science research community.
“NASA scientists sought to improve their climate modeling and simulation capabilities,” said CSC vice president Bob Scudamore. “This objective drove the requirements for greater computational power, memory and data storage. With this new technology in place, NASA scientists will be better able to understand the Earth’s systems and improve our predictions of climate, weather and natural hazards.”
Other large AlphaServer supercomputers are being used by the Pittsburgh Supercomputing Center, which is using more than 3,000 Alpha processors for the world’s largest non-military supercomputer used for open research; the French Atomic Energy Commission, which has the largest supercomputer in Europe; and the Australian Partnership for Advanced Computing, which has the largest university supercomputer in Australia.
HP said it is also building a 30-plus TeraOPS AlphaServer system for the Department of Energy’s National Nuclear Security Administration to simulate nuclear testing.
Meanwhile, the DOE said this week that it has tapped Salt Lake City-based Linux NetworX to build the world’s fastest Linux supercomputer at its Lawrence Livermore National Laboratory to support the lab’s national security mission.
When it comes online this fall, the Intel-based cluster is expected to be one of the five fastest supercomputers in the world.
“A machine of this size is very complex to integrate and manage. The partnership between Linux NetworX and LLNL is essential to the success of this endeavor,” said Dr. Mark Seager, LLNL’s assistant department head for TeraScale Systems. “This Linux NetworX system will significantly expand the computing resources available to Livermore’s researchers. We are very excited about the unclassified scientific simulations that will be accomplished on this world-class Linux cluster.”
The cluster will harness 1,920 Intel Xeon processors, specially designed by the Santa Clara, Calif.-based chip-making giant, running at 2.4 GHz with a theoretical peak of 9.2 teraFLOPS, or 9.2 trillion calculations per second.
That’s seven times more powerful than Deep Blue, the IBM computer that beat world chess champion Garry Kasparov in 1997. The cluster could hold the entire Library of Congress in memory four times over.
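The quoted 9.2-teraFLOPS peak squares with simple arithmetic if each 2.4-GHz Xeon retires two floating-point operations per clock; that per-cycle figure is an assumption about the SSE2-era Xeon core, not a number from the article:

```python
# Back-of-envelope check of the cluster's theoretical peak.
# Assumes 2 floating-point ops per clock per Xeon (SSE2 era) --
# an assumption, not a figure from the article or Intel.
PROCESSORS = 1920
CLOCK_GHZ = 2.4
FLOPS_PER_CYCLE = 2

peak_gflops = PROCESSORS * CLOCK_GHZ * FLOPS_PER_CYCLE
peak_tflops = peak_gflops / 1000.0
print(f"Theoretical peak: {peak_tflops:.1f} teraFLOPS")  # 9.2 teraFLOPS
```

Under those assumptions the product works out to 9,216 GFLOPS, matching the article's rounded 9.2-teraFLOPS figure.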
The Linux system takes advantage of LinuxBIOS, an open BIOS alternative that can boot nodes, is remotely manageable and is designed specifically for cluster systems; ICE Box, a Linux NetworX appliance designed specifically for management of Linux clusters; and Sub 1U Evolocity II, a double-density node design making its debut with this cluster.
Linux NetworX is also co-developing SLURM (Simple Linux Utility for Resource Management) with LLNL. SLURM is an open source resource management system developed for Linux clusters that focuses on portability, interconnect independence, fault tolerance and security.