With the number of chips per server and cores per chip increasing, future generations of servers may end up with far more processing power than the software running on them can utilize, even under virtualization, Gartner has found. The research firm issued a report on the issue earlier this week.
This doubling and doubling again of cores will drive servers well above the peak levels for which software systems, including operating systems, middleware, virtualization tools and applications, are engineered. The result could be a return to servers with single-digit utilization levels.
The problem is that the computer industry relies on, and is built around, constant upgrades. It's not like consumer electronics, where stereo technology, for example, remained largely unchanged for decades. The computer industry is driven by Moore's Law.
“Their whole business model is driven on delivering more for the same price,” said Carl Claunch, vice president and distinguished analyst at Gartner. “They have to keep delivering on the refresh rate, and you have to be constantly delivering something new.”
And faster chips are more glamorous than work on the memory and I/O subsystems, which have lagged behind processor performance. Memory and I/O buses are much slower than the CPU, causing bottlenecks even on a single PC; on a virtualized system, where multiple workloads contend for the same buses, it can be even worse.
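As a rough illustration of that gap (a hedged sketch, not something from Gartner's report), the snippet below times the same per-element arithmetic on a small array that fits in the CPU caches and on a large one that must stream from main memory; on most machines the DRAM-resident pass is noticeably slower per element, showing the memory bus as the limiting factor. It assumes NumPy is installed.

```python
# Illustrative sketch: the same add per element slows down once the data
# no longer fits in the CPU caches and must stream from main memory.
import time
import numpy as np

def ns_per_element(n, repeats):
    a = np.ones(n)
    t0 = time.perf_counter()
    for _ in range(repeats):
        a += 1.0                       # one add per element, in place
    return (time.perf_counter() - t0) / (n * repeats) * 1e9

cache_resident = ns_per_element(n=10_000, repeats=10_000)     # ~80 KB, cache-resident
dram_resident  = ns_per_element(n=20_000_000, repeats=5)      # ~160 MB, streams from DRAM

print(f"cache-resident: {cache_resident:.2f} ns per element")
print(f"DRAM-resident:  {dram_resident:.2f} ns per element")
```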
So with Intel (NASDAQ: INTC) flooring the gas pedal on new products, vendors like IBM (NYSE: IBM), Dell (NYSE: DELL) and HP (NYSE: HPQ) have no choice but to follow if they want revenue from product-refresh sales. “When someone does take their foot off the gas it will be a train wreck, because so much is dependent on that rate of refresh and speed of improvement,” said Claunch.
Ed Turkel, manager of the Scalable Computing & Infrastructure unit at HP, seemed to concur. “Due to the more compute power available with multi-core systems, the applications may need to be re-implemented to fully take advantage of the compute power available to them,” he said in an e-mail to InternetNews.com.
“This issue is commonplace in high performance computing today, but we will start to see this as an issue in other segments. For instance, virtualization environments will also need to become more multi-core-aware, perhaps creating virtual machines that virtualize multiple cores into a single machine that hides this added complexity.”
Sockets, chips and cores, oh my!
Currently, the most popular server motherboards have two to four sockets, with dual-socket being the most common, according to Intel. Anything above four sockets is labeled a “multi-processor” (MP) server, but those are rare, used only in extremely high-end systems and accounting for single-digit market share.
It gets even more confusing on the processor side: the return of simultaneous multithreading in Intel’s Core i7 (“Nehalem”) means a single physical core appears to the operating system as two logical processors, each able to run its own thread.
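One way to see that distinction, as a small hedged sketch (not from the article), is to compare the physical core count with the logical processor count the operating system reports; the third-party psutil package can report both.

```python
# Minimal sketch: compare physical cores with logical processors (hardware threads).
# Assumes psutil is installed (pip install psutil).
import os
import psutil

logical = os.cpu_count()                     # logical processors seen by the OS
physical = psutil.cpu_count(logical=False)   # physical cores

print(f"physical cores:     {physical}")
print(f"logical processors: {logical}")
# On an SMT ("Hyper-Threading") part such as Nehalem, logical is typically 2x physical.
```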
So far, Intel has launched a six-core Xeon and AMD has a six-core Opteron in the works. Intel plans an eight-core Core i7 (“Nehalem”) for servers, which will run two threads per core, and AMD is planning a 12-core server processor in 2011.
If motherboard makers start shipping 8-, 16- or 32-socket boards, 256-core machines become possible with eight-core chips. With 12- and 16-core processors, that figure could reach 512 cores, and so on in the coming years.
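The arithmetic behind those figures is straightforward; the back-of-the-envelope sketch below, using hypothetical configurations, simply multiplies socket counts by cores per chip (two threads per core would double the logical-processor totals again).

```python
# Back-of-the-envelope core counts for hypothetical large-socket servers.
for sockets in (8, 16, 32):
    for cores_per_chip in (8, 12, 16):
        print(f"{sockets:2d} sockets x {cores_per_chip:2d}-core chips = "
              f"{sockets * cores_per_chip:3d} cores")
```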
Multiprocessing and parallel programming are not an easy science, as many engineers have found in recent years. Parallelization has not kept pace with the multi-core race, and Gartner said organizations need to be aware of this growth in cores because software has hard scaling limits.
An explosion of parallelism
“You can go up and down the line and see the problems are all in software,” said Claunch. “We are going into an explosion of parallelism, and we had not grown at that rate in the past. The same piece of software might scale well up to 16 processors, but after that it’s too bottlenecked, and it might take significant changes to make it scale.”
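Claunch’s 16-processor example echoes what Amdahl’s law, a standard scaling model not cited by Gartner, predicts: if even a small fraction of a program is serial, adding cores stops helping quickly. A quick sketch:

```python
# Amdahl's law: speedup is capped by the serial fraction of the work.
def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a workload that is 95% parallel tops out far below the core count.
for cores in (16, 64, 256):
    print(f"{cores:3d} cores -> {amdahl_speedup(0.95, cores):4.1f}x speedup")
```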
Microsoft has gone on record stating that Windows 7 and Windows Server 2008 R2, as well as the next version of SQL Server, currently in development under the codename “Kilimanjaro,” will support up to 256 cores. Thus far, Microsoft is the only software developer to commit to such scalability.
Claunch applauded that effort, as well as the work by VMware and Linux developers to support massively scalable systems, but said software companies simply can’t keep up with Intel and AMD’s rate of innovation.
“They need to put significantly more resources into moving the software forward,” he said. “One of the major computer science research projects now is asking whether we need entirely new ways of writing programs. That’s why you see big amounts of money being invested by Microsoft to research this at a theoretical level.
“We’re at the point of exploring the size of the problem and we don’t have a glimmer that says here’s the solution. We’re still seeking a solution.”