As Web services move into mainstream deployment, the standards that underpin them still have significant gaps to fill.
That was the consensus of experts at the recent Web Services on Wall Street conference in New York. They say new standards are needed if the IT community is to successfully build on top of Web services’ XML foundation.
The base protocol itself is not in dispute (“you can’t argue with SOAP [Simple Object Access Protocol]”); the open questions concern what gets layered on top of it.
One significant effort to address interoperability is currently in the works at the OASIS XRI Data Interchange (XDI) standards committee.
The just-formed group is working on a new security spec, which will enable the automatic exchange of Web documents. In addition, separate projects are seeking to establish Web-service standards related to grid computing.
With so much simultaneous activity, some wonder if the Web services arena isn’t in danger of getting ahead of itself. “It’s important that we evolve the standards base deliberately and slowly,”
warned Sonny Fulkerson, a senior technical staff member at IBM.
Fulkerson sees a Web-services landscape where “we’re all starting to hover around the same logical architecture.” Here, the “enterprise service bus” serves as the foundation for handling XML messaging data and services over HTTP.
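The messaging pattern Fulkerson describes, XML payloads carried over HTTP, can be illustrated with a minimal SOAP 1.1 envelope. This is a generic sketch, not IBM’s architecture; the `getQuote` payload and its `symbol` attribute are invented for illustration.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(body_xml: str) -> bytes:
    """Wrap an XML payload in a SOAP 1.1 envelope for transport over HTTP."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    body.append(ET.fromstring(body_xml))  # the service-specific request
    return ET.tostring(envelope, xml_declaration=True, encoding="utf-8")

# A hypothetical service request; the element name is illustrative only.
msg = build_envelope('<getQuote symbol="IBM"/>')
```

A service bus routes envelopes like this between endpoints, inspecting only the SOAP headers and leaving the body for the destination service.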
In the near term, Fulkerson sees the focus shifting to services management. “We’ll start to see new products coming out such as provisioning and scheduling engines, which will be able to manage the infrastructure of grids,” he said.
“The past few years have been about building and deploying Web services,” agreed Dmitri Tcherevik, divisional vice president at Computer Associates. “The problems we’re facing now are management and security.”
A major challenge is making sure transmission rates are sustainable, because of the computing costs associated with Web services’ main medium of interchange: XML itself.
“The reality is, XML is expensive to process,” said IBM’s Cohen. CPU power must be applied to decrypt and analyze XML messages as they’re received; these must subsequently be reencoded and reencrypted before they’re sent on their way.
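The per-hop cost Cohen describes can be made concrete with a small timing sketch. This toy example only measures the parse and re-serialize steps (the decrypt/re-encrypt stages he mentions are omitted); the message size and hop count are arbitrary.

```python
import time
import xml.etree.ElementTree as ET

# Build a message with many elements so the per-hop cost is visible.
items = "".join(f'<item id="{i}">{i}</item>' for i in range(5000))
message = f"<batch>{items}</batch>"

start = time.perf_counter()
for _ in range(10):                                   # ten intermediary "hops"
    tree = ET.fromstring(message)                     # parse on receipt
    message = ET.tostring(tree, encoding="unicode")   # re-serialize to forward
elapsed = time.perf_counter() - start
print(f"10 parse/serialize hops: {elapsed:.3f}s")
```

Every intermediary that touches the message pays this tax again, which is why gateways that offload XML handling to dedicated hardware became attractive.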
Tcherevik noted that the XML processing penalty makes it difficult to deliver Web services in real time. He sees hardware acceleration as the end-run around potential slowdowns caused by XML handling delays. (Via its partnership with
Datapower Technology, Computer Associates sells a hardware accelerator that hooks into the network gateway to speed encryption, signing, and decompression of XML.)
Others see the solution in grid computing, which helps speed Web services by grabbing processing power from underutilized machines on the network. “What we want is resource fungibility — we need to generate CPU power on demand,” Cohen added.
However, to date that’s been difficult to achieve. “Too much hardware runs underutilized because it’s been too difficult to reprovision it,” Cohen added. “The problem is how we communicate requests from systems and make them actionable, and so far that isn’t possible unless we write very specific code.”
But improvements are in the offing. In terms of hardware, Duncan
Johnston-Watt, chief technology officer of Enigmatec Corp., sees grids being spurred by the rising deployment of “Lintel” (Linux on Intel hardware) blade servers.
On the software side, grids are being enabled by newer software that supports dynamic resource allocation: the ability to create collaborative networks on demand.
Here, Cohen points to the work of the Distributed Resource Management Application API (DRMAA) working group, whose spec will make it easier to divide a single computing task across multiple systems.
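DRMAA’s actual interface targets distributed resource managers, but the underlying divide-and-conquer idea can be sketched on a single machine with Python’s `multiprocessing`. The function names here are ours, not DRMAA’s; the pool of local worker processes stands in for nodes on a grid.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Work unit that a resource manager would schedule on one node."""
    lo, hi = chunk
    return sum(range(lo, hi))

def split(n, parts):
    """Divide the range [0, n) into roughly equal contiguous chunks."""
    step = n // parts
    return [(i * step, n if i == parts - 1 else (i + 1) * step)
            for i in range(parts)]

if __name__ == "__main__":
    chunks = split(1_000_000, 4)
    with Pool(4) as pool:                     # four local "nodes"
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # matches sum(range(1_000_000))
```

What DRMAA adds over a sketch like this is a standard way to submit, monitor, and control such work units across heterogeneous resource managers, rather than writing the “very specific code” Cohen complains about for each one.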
Other experts note that today’s grids won’t remain static. Millions of new nodes are coming on line — many in the form of occasionally connected wireless-computing users.
One result, according to Hal Stern, vice president and chief technology officer of Sun Microsystems’ Sun Services operation, is that the line between Web services and what we call grid computing is going to blur.
“The application development and deployment pattern is changing,” he added. “We’re still early in this.”