
Getting on the Grid

With a whiff of bravado and a touch of retro, IBM has announced an approach to networked computing that could dramatically alter how corporations allocate computing resources for large-scale projects.

The network model prevalent in the corporate world today distributes resources as needed, combining desktop power with servers dedicated to specific functions, such as printing or Web serving. For most users, the main computing power sits on their personal machines; larger-scale projects tend to centralize resources on a mainframe or a supercomputer.

What IBM has announced treats computing power less as a function of the local machine and more as a function of the extended network. IBM uses the analogy of an electrical grid to describe its plan. When you plug in your refrigerator, you don't need to worry about the electrical generators that create the electricity. Similarly, if you need loads of computing power, you should be able to tap into those resources without worrying about how those resources are generated. Simply plug into the grid and compute away.
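To make the analogy concrete, here is a minimal sketch in Python of the "plug in and compute" idea. This is purely illustrative and not IBM's technology: a local process pool stands in for remote grid nodes, and the workload is a placeholder. The point is that the caller submits work and collects results without ever knowing which machine (or here, which process) did the computing.

    # Minimal sketch: a local process pool stands in for remote grid nodes.
    # The caller submits work and collects results without knowing which
    # worker ran it -- just as a refrigerator never knows which generator
    # produced its electricity.
    from concurrent.futures import ProcessPoolExecutor

    def heavy_computation(n: int) -> int:
        # Placeholder for a compute-intensive job (physics simulation,
        # airframe analysis, drug-candidate screening, and so on).
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with ProcessPoolExecutor() as grid:
            jobs = [grid.submit(heavy_computation, n) for n in (10**5, 10**6, 10**7)]
            for job in jobs:
                print(job.result())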

Remotely Déjà Vu
Now, for those who remember playing The Oregon Trail over a teletype machine, the idea of transparently tapping remote computing resources may not sound all that new. The notion of a grid-based virtual-computing system has been floating around the supercomputer world for years now. And distributed-computing schemes are not new, either: witness the SETI@Home program, which harnesses idle CPU cycles on volunteers' networked machines to search radio-telescope data for signs of extraterrestrial intelligence.
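The SETI@Home pattern is easy to caricature in a few lines. The sketch below is a toy, not SETI@Home's actual protocol: a coordinator hands out work units, idle clients crunch them, and results flow back. Here a local queue stands in for the network, and the "analysis" is a placeholder.

    # Toy version of the SETI@Home pattern. A local queue stands in for
    # the coordinator's network interface; the analysis is a placeholder.
    import queue

    work_units = queue.Queue()
    for chunk_id in range(5):
        # Each work unit is a chunk of data to analyze.
        work_units.put((chunk_id, list(range(chunk_id * 100, (chunk_id + 1) * 100))))

    results = {}

    def client_loop():
        # A volunteer machine runs a loop like this during idle cycles:
        # fetch a work unit, analyze it, report the result.
        while not work_units.empty():
            chunk_id, samples = work_units.get()
            # Stand-in analysis; the real project searches radio-telescope
            # data for narrow-band signals.
            results[chunk_id] = max(samples)

    client_loop()
    print(results)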

"Grid computing presents the next big evolutionary leap for the Internet," says David Turek, IBM vice president for Linux Emerging Technologies. "If we think of the Internet today, we think of mail, instant messaging, and Web serving -- it's wonderful for content distribution. Grids really extend the notion of virtual computing substantially, as they unite far-flung computing resources and craft a virtual computing environment."

This uniting is already underway: IBM has won a contract from the British government to provide key technologies within the "National Grid," a massive network of computers distributed throughout the United Kingdom. IBM is building a Grid system at Oxford University, where it will be used to store and process high-energy physics data. The grid will be connected to the U.S. particle physics laboratory in Chicago and the new Large Hadron Collider at CERN, the European particle physics laboratory in Geneva, Switzerland.

Once the Grid is complete, scientists all around the United Kingdom will be able to access data to collaborate remotely on CERN projects. For example, using the National Grid, scientists in a lab in Cambridge will be able to run sophisticated high-energy physics applications on computers in Belfast.

IBM is tackling a similar project in the Netherlands. In addition, IBM Research built its own Grid -- a geographically distributed supercomputer linking IBM research and development labs in the United States, Israel, Switzerland, and Japan.

Brute Force Networking
Retro? A little. The ability to run applications remotely has long been the mantra of the X Window System, in which MIT programmers created a network-transparent protocol for running applications across the network. (X is the basis of the graphical interfaces used in Linux and most UNIX systems.) What IBM is proposing is to take this transparency to the next level: totally abstracting the differences between machines and making this transparency and power available on demand.
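X's network transparency shows up in a single environment variable: DISPLAY tells a client where to draw, and that "where" can be another machine entirely. The sketch below launches xclock, a standard X demonstration program, against a hypothetical remote display; it assumes the remote X server is reachable and willing to accept the connection.

    # X network transparency in miniature: point DISPLAY at another
    # machine's X server and the client draws there instead of here.
    # "remotehost:0" is a hypothetical host; real use requires that the
    # remote X server accept the connection (e.g., via xhost or ssh -X).
    import os
    import subprocess

    env = dict(os.environ, DISPLAY="remotehost:0")
    subprocess.run(["xclock"], env=env)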

IBM is also rolling its Project eLiza and existing operating systems into the larger Grid initiative. eLiza is a program for delivering self-managing systems and technologies with enhanced performance and security. Grid protocols will be incorporated into eLiza offerings, while Linux and IBM AIX are already capable of supporting and accessing Grid projects. Irving Wladawsky-Berger, architect of IBM's Internet, Linux and Project eLiza strategies, will also lead IBM's Grid Computing Initiative.

IBM is turning to the open-source community to help implement the project, enlisting the Globus project to develop protocols and technologies. Begun in 1996, Globus is a largely academic effort designed to implement virtual computing across distributed machines. The source code for Globus is available to anyone.

So what practical applications will we see? Despite the electrical-grid metaphor, this technology initially isn't meant for the casual user who wants to crunch some Excel spreadsheets, nor for the typical network built around print and file services. It's meant first for workgroups that need brute computing force to complete their work: auto and airplane designers, scientific researchers, or drug-research firms.

"The real market are virtual organizations: companies that come together in a grid to collaborate on a problem," Turek says. "A lot of these will be driven by people with intense computing needs, like earth sciences or weather forecasting."

Bonuses for Big Blue
What does IBM get from this new plan?

"In the strategic sense, we get the opportunity to provide a service to customers," Turek says. "Today, if you want to compute, you buy a computer. In the future, you can do computing by dialing into a service." In addition to the aforementioned UK and Netherlands projects, the U.S. National Science Foundation has made funds available for development of an infrastructure that may result in system deployments as well, while the Department of Defense is considering grid projects. In addition, NASA already deployed grids internally.

And, perhaps more pragmatically, IBM will look to work with resellers, such as current Application Service Providers (ASPs), who can offer grid services on their own.

"This technology will allow a lot of other companies providing similar services -- such as servers, storage, middleware -- everything they need to sell grid services on their own," Turek says. "A market segment will want to build grid for their own use, so we will sell them services and technologies under their own initiatives.

"ASPs are used to delivering applications on demand, and we'll work with them to make that evolutionary step to delivering computing power on demand."

This article was reprinted from CrossNodes, an EarthWeb Network site.