The CTO recently discussed the company’s utility computing strategy, as well as what effect Sun’s recent rapprochement with Microsoft might have.
Q: What challenges do you face in your current position?
The second thing you need is a risk or change management architecture — that’s where I think the preventive services fit in, so you could evolve this over time at a very low cost and a very low risk. The third thing is, if you’re going to change your demands on it and you’re going to assemble it on the fly, you need a financial model that fits that architecture and that’s where we see utility computing architecture coming in. You need a financial model based on a technical architecture and if you try to do one without the other what you get is a large outsourcing contract. Utility computing is something to build on — an architecture for a much more creative financing model, and then preventive services to be able to get out of the “insurance” game and get into the “assurance” game. Finally, if you’re going to do utility computing, you have to have the operational maturity to handle it and I think this is something we learned in doing data center consolidation.
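The financial model Stern describes, where spend tracks metered consumption rather than a flat outsourcing fee, can be sketched in a few lines. This is an illustrative sketch only; the CPU-hour unit, the committed-baseline structure, and the rates are all hypothetical, not Sun's actual billing terms:

```python
from dataclasses import dataclass

@dataclass
class UsageSample:
    """One metering interval: CPU-hours consumed by a customer workload."""
    cpu_hours: float

def monthly_bill(samples: list[UsageSample], on_demand_rate: float,
                 committed_cpu_hours: float, committed_rate: float) -> float:
    """Bill a committed baseline at a discounted rate and any overage at
    the on-demand rate. The point of the utility model: the bill follows
    actual consumption rather than a fixed multi-year contract figure."""
    used = sum(s.cpu_hours for s in samples)
    base = min(used, committed_cpu_hours) * committed_rate
    overage = max(0.0, used - committed_cpu_hours) * on_demand_rate
    return base + overage

# A light month costs less than a heavy one: spend tracks demand.
light = [UsageSample(100.0)] * 4   # 400 CPU-hours total
heavy = [UsageSample(400.0)] * 4   # 1600 CPU-hours total
assert monthly_bill(light, 1.0, 1000.0, 0.5) < monthly_bill(heavy, 1.0, 1000.0, 0.5)
```

Without the technical architecture underneath (metering, provisioning, capacity models), the committed baseline has to be set so conservatively that this collapses into the fixed-fee outsourcing contract Stern warns about.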
When people hear ‘utility computing,’ IBM and HP likely come to mind, probably because they have thrown the most acquisition and marketing heft into it.
But Sun Microsystems
has its own concept of how utility computing — where automated computing is provided on-demand and often billed on a metered basis — is supposed to work.
Sun Services CTO Hal Stern oversees the technological aspects of the company’s utility computing strategy and is responsible for persuading customers and partners to choose Sun over IBM or HP.
Stern isn’t your typical services executive. He began his career as a systems administrator for the Santa Clara, Calif., company about 15 years ago and has worked in a variety of capacities, mostly as a chief technologist.
His division experienced a changing of the guard this year, when Patricia Sueltz left as executive vice president of Sun Services for on-demand CRM provider Salesforce.com. Marissa Peterson, formerly chief customer advocate and executive vice president, took over.
The challenge that Marissa Peterson gives to me is: We’re going to scale services with technology instead of people. Don’t expect more heads. Go grow the business purely through the technology. There are lots of technologies where you can do that — remote services, management capabilities, service-level automation. But what that means is we’re going to build a technological architecture that scales up and delivers revenue for Sun without: ‘A,’ having architectural control, because you need people to do that; and ‘B,’ adding a lot of people.
Q: How do you define Sun’s vision for utility computing services?
Utility computing is one of three legs in a stool of how you design a data center architecture. The first is an operational architecture. How are you going to most efficiently manage the resources that you have? Whether it’s managed services, or whether that’s just overall good design for system management and automation of things like provisioning and virtualization.
Q: How do you apply the “stool” analogy to Sun’s approach to utility computing?
When we say let’s go do utility computing, it becomes a question of No. 1: How do you architect services so that they’re assembled? Web services are one example: what does it mean to add one to an application if you want to deploy one more? You have to be able to scale up and scale out, so you have to know what it means when you want to add power units to your applications. What is Sun doing that’s different? I can scale up and out with the same architecture. It’s all JES [Java Enterprise System], it’s all Solaris or Linux, whether it’s running on an x86 blade or an x86 rack-mount server or a high-end machine like a Sun Fire 15K. It’s all the same systems technologies.
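Stern's claim that scale-up and scale-out share one architecture can be sketched as a single capacity pool that grows either by adding boxes or by growing a box; the application only sees the pool. The "power unit" measure and the class names here are hypothetical illustrations, not a Sun API:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    power_units: int  # abstract capacity units (a hypothetical measure)

@dataclass
class Cluster:
    nodes: list[Node] = field(default_factory=list)

    def capacity(self) -> int:
        """Total capacity of the pool; the application sees only this."""
        return sum(n.power_units for n in self.nodes)

    def scale_out(self, units_per_node: int, count: int = 1) -> None:
        """Add more boxes, e.g. x86 blades or rack-mount servers."""
        self.nodes += [Node(units_per_node) for _ in range(count)]

    def scale_up(self, index: int, extra_units: int) -> None:
        """Grow one box, e.g. a high-end Sun Fire-class machine."""
        self.nodes[index].power_units += extra_units

c = Cluster([Node(4)])
c.scale_out(4, count=3)    # scale out: four 4-unit boxes
c.scale_up(0, 12)          # scale up: first box grows to 16 units
assert c.capacity() == 28  # one pool either way
```

The design point is that both operations change only `capacity()`; nothing above the pool has to know which growth path was taken.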
Then how do I incrementally secure it? How do I have a networking architecture that allows me to assure that the networking infrastructure scales up as well as I virtualize things and deploy across servers? That’s not just a question of adding more switches, that’s also making sure you’re doing the right load-balancing, and have the right availability architecture. I think another major difference here is that we look at these things inherently as networking problems. It’s not about provisioning on the system, it’s not about resource allocation on the system — it’s about doing this across a number of systems that are networked together. The boundary is the data center, not the box.
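The "boundary is the data center, not the box" idea amounts to placing work against the free capacity of the whole networked pool rather than provisioning on any single server. A minimal, hypothetical placement sketch (a real system would also weigh load balancing, availability, and security zones, as Stern notes):

```python
def place(workload_units: int, servers: dict[str, int],
          capacity: dict[str, int]) -> str:
    """Place a workload on the server with the most free capacity.
    'servers' maps server name -> units already allocated;
    'capacity' maps server name -> total units. Raises if nothing fits."""
    candidates = [(capacity[s] - used, s) for s, used in servers.items()
                  if capacity[s] - used >= workload_units]
    if not candidates:
        raise RuntimeError("data center is out of capacity")
    free, best = max(candidates)  # least-loaded wins
    servers[best] += workload_units
    return best

# The allocation decision ranges over every box in the data center.
servers = {"blade1": 2, "blade2": 0}
capacity = {"blade1": 8, "blade2": 4}
first = place(3, servers, capacity)
second = place(3, servers, capacity)
```

Each call considers every server in the pool, so adding a box changes placement decisions without any application-level reconfiguration.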
Q: Compare Sun’s notion of utility computing to what IBM and others are creating.
Their attitude is ‘We’ll go build a financial model for you based on your current data center architecture. Just give us a data center and we’ll pay you back for it. As it grows, we’ll send you additional bills.’ I think it’s very much what IBM is doing when they announce these 10-year deals. I think it’s a very dangerous proposition because you’re asking companies now to make bets and to find something that’s going to have a value proposition over 10 years and if you can’t find value in utility computing in a year or two years . . . I look at [IBM’s] on-demand as a large outsourcing contract. Where’s the technical architecture for on-demand? What’s sitting underneath it? They talk a lot about eLiza and other sorts of self-healing, autonomic computing technology . . . IBM has what? 120,000 employees in Global Services these days? And yet they talk autonomic computing. To me those things are at opposite ends of the equation. If it’s automatic, I don’t need 120,000 people. If I have a contract for 120,000 people, they’re doing something. It’s hard to see how that fits in conjunction with on-demand. Unless you can do some real careful architectural design that describes how the
applications work, how the applications consume resources, how you secure applications, I don’t see how you automate that process.
Q: So Sun has the architecture and process in place. How, then, are the contracts structured?
The customer can come to us and say, ‘We want the utility solution for us, so we can run our own utility, because we want to better understand our capacity models and we want to work with Sun to figure out how to get our hardware acquisition cost down.’ Others say, ‘We want to work with a hosting partner because we want this run by someone else and we really want it to be completely invisible to us what the incremental hardware adds and subtracts are. I’ll go work with a hosting partner there.’ In both those cases the utility model is different from ‘OK, let’s go outside the data center,’ because there is a lot of architectural control that goes into that to make sure the utility model doesn’t turn into a mass consumption model.
Q: How, if at all, does the recent agreement with Microsoft
affect what you’re doing with utility computing?
I’ll give you a practical example. We have not defined what interoperability in this case means. But let’s go back to the Web services example. You want to drive down your costs of assembly. You know that no matter what technology foundation you start from, you can glue the pieces together. We have to agree on how identity services talk to each other. We have to agree
on how security services talk to each other, which has been a real point of contention in the industry. So, the pact with Microsoft will help in the sense that things that tended to involve a lot of workarounds should work out of the box. How does that help utility? By driving out the one-off engineering and the costs of integration, it makes it easier for us to
replicate successes. I look at this as a big win because when you start to talk about interoperability and consistency in the platform it allows you to have conversations at a different level. Also, what Microsoft has agreed to on their end is that there is going to be a Java virtual machine pretty much everywhere now. By getting over that hurdle I don’t have to have that stupid argument of Solaris versus Linux versus Windows.
Q: If you had to make a sales pitch to people considering Sun’s utility computing strategy, IBM’s e-business on demand or HP’s Adaptive Enterprise, what would you say to sell customers on choosing Sun?
No. 1, there is an architectural requirement. The architecture has to include scale up and scale out. It’s not about Linux, or about Intel; it’s about all of the different design patterns you would use in the enterprise. No. 2, it’s about using the right economy of partners. There’s no company that’s going to do all of this. Therefore there is no clean sheet of paper. You have to start from where you are; you have to be able to integrate. No. 3, you have to be able to scale through technology and show return on investment in a short period of time. No. 4, you have to be able to understand your entire stack from the application level down to the metal and know that you’ll be able to pull it apart so you can figure out the right granularity for utility.