At one time, complex computing tasks — from designing safer and more efficient automobiles to forecasting weather and seismic activity to researching drugs and gene sequencing — required a mainframe computer. Now dubbed supercomputers, these machines are made by the likes of IBM Corp. and Seattle’s Cray Inc., which said Thursday that it shipped its first Cray MTA-2 supercomputer system in late December.

Now a different method of performing those complex calculations is beginning to gain clout in the commercial world: grid computing.

“Grid computing is a method of harnessing the power of many computers in a network to solve problems requiring a large number of processing cycles and involving huge amounts of data,” said Alan Meckler, chairman and chief executive officer of InternetNews.com parent INT Media Group, which Thursday launched GridComputingPlanet.com, a Web site dedicated to coverage of the grid computing industry. “Rather than using a network of computers simply to communicate and transfer data, grid computing taps the unused processor cycles of numerous — sometimes thousands of — computers.”

Traditional supercomputers are single systems with large numbers of processors, enormous amounts of memory and performance that is measured in gigaFLOPS or even teraFLOPS. Needless to say, these machines are expensive and require top-notch technical expertise to maintain. For instance, IBM’s ASCI White supercomputer is rated at 12 teraFLOPS and costs $110 million.

Grid computing, on the other hand, is a type of networking that harnesses the unused processor cycles of computers in a network (including lowly PCs) for supercomputing tasks.

One of the most well-known grid computing projects is SETI@Home, in which PC users worldwide donate their unused processor cycles to analyze radio signals from outer space for signs of extraterrestrial life. Volunteers simply download a screen saver from the project, and their processing power is used to analyze information when the screen saver is active. SETI@Home says that by harnessing volunteers’ unused processor cycles it has achieved about 15 teraFLOPS with about 3 million volunteers. It says the cost has been about $500,000 to date.

While SETI@Home is a non-profit project, commercial firms have also begun to take an interest. Juno Online, now a part of United Online, latched onto the idea last year, dubbing it the Juno Virtual Supercomputer Project. The company viewed the virtual supercomputer as a way of monetizing its free subscriber base by selling supercomputing services to research firms. In May of last year, the company secured its first contract when bioinformatics incubator LaunchCyte LLC signed a letter of intent for use by it and its portfolio companies.

Other firms are also getting into the act, including supercomputing mainstay IBM. In December, Big Blue sealed a deal to provide a traditional parallel processing system to the University of Texas at Austin’s advanced computing center (TACC). TACC will use the system to test computing grids, and IBM has long maintained that grid computing will drastically change computing by enabling heterogeneous systems to share resources over the Web.

INT Media Group launched GridComputingPlanet.com to help the technical community stay abreast of developments brought about by the emergence of grid computing. As part of that effort, the company also announced the launch of the Grid Computing Planet Spring 2002 Conference & Expo, which is slated for June 17-18 at the DoubleTree Hotel in San Jose, Calif.

“Grid Computing Planet will become the gateway to grid computing and help solve problems that are beyond the processing limits of individual computers, as well as being a resource center for the technical community, online and offline,” Meckler said.
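The idle-cycle model the article describes (a coordinator splits a large job into independent work units, and volunteer machines process them only when otherwise idle, much as SETI@Home computes while its screen saver is active) can be sketched roughly as follows. All function names and the workload below are hypothetical illustrations, not SETI@Home's actual code or protocol.

```python
import hashlib

def split_into_work_units(data: bytes, unit_size: int) -> list:
    """Divide a large dataset into independent chunks for distribution."""
    return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]

def process_unit(unit: bytes) -> str:
    """Stand-in for the real analysis (e.g. scanning a slice of radio signal)."""
    return hashlib.sha256(unit).hexdigest()

def volunteer_node(units: list, is_idle) -> dict:
    """Process assigned units, but only while the host reports itself idle."""
    results = {}
    for i, unit in enumerate(units):
        if is_idle():  # analogous to "the screen saver is active"
            results[i] = process_unit(unit)
    return results

# Example: one node working through three units on a host that is always idle.
units = split_into_work_units(b"signal data from the telescope feed", 12)
results = volunteer_node(units, is_idle=lambda: True)
print(len(units), len(results))
```

Real projects add much more on top of this pattern: redundant assignment of the same unit to several volunteers, result validation, and checkpointing, since volunteer hardware is unreliable and untrusted.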