Behind Intel’s Research Curtain

MOUNTAIN VIEW, Calif. — Intel showed off 70 different research projects at its “Research Day” here at the Computer History Museum. In the past, the annual event was held at the chip giant’s headquarters in nearby Santa Clara, but the new venue offered more space.

Justin Rattner, Intel’s Chief Technology Officer, kicked off the event Wednesday morning with a keynote address focused on Intel’s (NASDAQ: INTC) research efforts in recent years. An exhibit area showcasing the projects was set up in a nearby hall. Ironically, any one of the 70 exhibits almost certainly featured more computing power than the entire floor of early computers on display below as part of the Museum’s collection.

In a Q&A session after his talk, Rattner indirectly addressed a recent controversy with graphics chipmaker Nvidia, whose CEO, Jen-Hsun Huang, has been pushing the idea of GPUs taking a more central role in future computers built around “visual computing.”

“We have nothing against GPUs. We probably build more of them than anyone else,” said Rattner. But Intel’s long-term view is that traditional raster graphics driven by the GPU are “problematic.”

He said Intel thinks a new architecture based on aggressive ‘many-core’ processors “will deliver a vastly better visual chip.”

Rattner said the first example will be Intel’s forthcoming Larrabee architecture, a many-core design for visual computing that he said will be previewed at the Siggraph conference in August.

On the exhibit floor

A big theme at many of the exhibits: energy and power savings.

One exhibit showed “Power Clamping” technology designed to help IT departments limit how much power is used in the datacenter. “You can look at a typical workload and assign a budget and say, for example, the system can’t draw more than 340 watts,” explained Intel researcher Paul Smith. If servers demand more power, performance is scaled back on secondary systems to stay under the preset power limit. Alerts can also be sent automatically to relevant IT staff when power limits are approached.
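To make that loop concrete, here is a minimal, hypothetical sketch in Python of how such a power budget might be enforced. The 340-watt figure comes from Smith’s example; the function names, the throttling step, and the simulated sensor reading are illustrative assumptions, not an actual Intel interface.

import random

POWER_BUDGET_WATTS = 340   # administrator-assigned per-server budget (Smith's example)
ALERT_THRESHOLD = 0.95     # notify IT staff as draw approaches the cap

def read_power_watts():
    # Stand-in for a real power sensor; returns a simulated reading.
    return random.uniform(280, 400)

def enforce_budget(performance_level):
    draw = read_power_watts()
    if draw > POWER_BUDGET_WATTS:
        # Over budget: scale back performance to shed power.
        performance_level = max(0.5, performance_level * 0.9)
        print(f"{draw:.0f} W exceeds the budget; throttling to {performance_level:.0%}")
    elif draw > ALERT_THRESHOLD * POWER_BUDGET_WATTS:
        # Near the cap: alert relevant staff before the limit is hit.
        print(f"Alert: {draw:.0f} W is near the {POWER_BUDGET_WATTS} W budget")
    return performance_level

if __name__ == "__main__":
    level = 1.0
    for _ in range(5):
        level = enforce_budget(level)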


An Intel researcher explains platform power management.
Source: Intel

The official name for the technology is Dynamic Power Node Manager, and it’s expected to be available in the “Nehalem” server architecture due out late this year.

On the smaller-scale power management side, Intel also showed Platform Power Management (PPM) technology designed to reduce power consumption in a range of computer systems, from mobile computers and desktops up through servers. In a demo, an onscreen power meter compared two notebooks, one with and one without the technology. The one with PPM consistently drew 30 to 40 percent less power by monitoring and shutting down parts of the system not in use.
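As a rough illustration of that side-by-side comparison, the short Python sketch below models the basic idea: with PPM-style gating, platform components that are idle stop contributing to the total draw, while without it they keep drawing power. The component names and wattages are made up for illustration, chosen only so the difference lands near the range the demo showed.

# Hypothetical idle-draw figures for platform components, in watts.
COMPONENT_DRAW = {"graphics": 2.5, "display": 2.0, "wifi": 1.2, "audio": 0.5, "usb": 0.4}
BASE_DRAW = 6.0  # CPU, memory and other always-on parts of the platform

def platform_power(in_use, ppm_enabled):
    # Estimate total draw; with PPM, idle components are gated off.
    total = BASE_DRAW
    for component, watts in COMPONENT_DRAW.items():
        if component in in_use or not ppm_enabled:
            total += watts
    return total

active = {"display"}  # only the screen is actually being used
without_ppm = platform_power(active, ppm_enabled=False)
with_ppm = platform_power(active, ppm_enabled=True)
print(f"Without PPM: {without_ppm:.1f} W  With PPM: {with_ppm:.1f} W")
print(f"Savings: {1 - with_ppm / without_ppm:.0%}")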

Intel researcher George Allison said PPM is very different from the ACPI power management spec that turns off peripherals when not in use. “ACPI’s been around a long time and is tied to the operating system,” said Allison. “We can get additional benefits by leveraging hardware.”

George Goodman, a director at Intel’s Architecture Lab, said PPM doesn’t affect performance. “It’s more like having islands of control” that work automatically. “There’s no user interaction required,” he added. “This could do a lot to extend battery life.”

PPM will start to appear in some products next year. “By 2012 we expect it to be across all our platforms,” said Goodman.

From Research to Reality

Rattner noted that while Intel’s latest chips have received plaudits for energy-saving features, it took years to convince senior management to pursue the technology. The recently introduced, ultra-low-power Atom processor has its roots in research Intel started in 1999. At the time, the company was looking at cutting the power drain of its desktop Pentium line down to a few watts, a tiny fraction of what the processors then required. The idea was “reviewed with senior staff, but didn’t make the cut,” said Rattner.

Another effort was made in 2002 at Intel’s microprocessor research lab in Austin, Texas, with “Snocone,” an ultra-low-power processor based on Intel architecture and designed for an emerging class of ultramobile computers. But work on what would become the Atom didn’t really begin until 2004. “It’s a clear example of a long-term, persistent research effort ultimately having a big payoff,” said Rattner.

Similarly, he noted that WiMax, the technology that enables long-distance wireless broadband, “didn’t just fall out of the sky. In 1999 Intel Architecture Labs developed a vision for fixed, portable and mobile [networks] superior to cellular.”

But he conceded Intel ultimately went outside the company to acquire technology that became WiMax.

Paraphrasing a quip made famous by former Sun exec Bill Joy, Rattner joked that “most of the smart people work at some other company.”
