Intel’s Gelsinger Mulls Past Missteps, Future Plans

Pat Gelsinger
Source: Intel

SAN FRANCISCO — On July 18, 1968, two engineers from Fairchild Semiconductor, Gordon Moore and Robert Noyce, set up a company in Santa Clara; Andy Grove, also from Fairchild, joined them as their first hire. The founders initially considered calling the company “Moore Noyce”. However, the name sounded a lot like “more noise,” so they ultimately settled on “Intel”, short for “Integrated Electronics”.

Today, the company that was almost known as “More Noise” is a $40 billion industry titan, with 85,000 employees and products found in everything from computers to cars.

With Intel’s 40th anniversary coming up, Pat Gelsinger, the company’s first CTO and the engineer who led development of the 80486 chip, spoke to reporters today, offering a look back at several key events that shaped major changes at Intel — and across the industry.

The first, he said, was the introduction of the 80386 chip in 1985, which ushered in the era of 32-bit computing.

“I remember being beat up by numerous press people over comments like ‘Who could possibly need 32 bits? That’s a minicomputer,’” he told a gathering of reporters. But it was also the era in which what became known as the “PC architecture” was defined, thanks to Compaq releasing its first desktop computer.

Then, in the 1990s, came the RISC vs. CISC battle between Intel’s CISC-based designs (CISC is short for “Complex Instruction Set Computer”) and RISC (“Reduced Instruction Set Computer”) vendors like MIPS and Sun Microsystems. The momentum of the x86 platform was what saved it in the end, he said.

“The lesson was even if something is better, unless it’s better in a sustained way, people can’t create an architecture for it,” Gelsinger said.

Intel’s third major turning point came in 2000. Until then, the company had been focused on increasing the number of transistors per square inch, as per Moore’s Law, while also increasing their frequency.

Yet the company realized it could not keep making processors faster through clock speed alone, and multi-core designs emerged as the solution.

“At the time, we had been scaling frequency and following Moore’s Law, but also following a 50 percent increase in frequency,” he said. “I even made a bold prediction about reaching 10 gigahertz. Eventually it will come true, but not by about 20 years.”

Gelsinger admitted Intel was late in making that turn from clock speed to cores. “We tried to exercise one generation too long of traditional scaling and failed,” he said.

Despite all of the hurdles, Intel has come out on top, in large part because its architecture has remained consistent.

“There have been a zillion different architecture discussions over this 40-year period of time,” he said. “Over that period of time, architecture compatibility and continuity trumped every other aspect of technology.”

Looking ahead

Gelsinger didn’t spend all of his time dwelling on the past, however; he also made a number of predictions about where technology is going — and how Intel plans to cash in.

For starters, Moore’s Law appears safe, he said — at least for the next decade as Intel moves to a 32nm manufacturing process, then 22nm, 14/15nm, and finally, 10nm.
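That roadmap follows a regular cadence: each new node shrinks feature sizes by roughly 0.7x linearly, which halves the area per transistor and keeps density doubling on schedule. A quick sketch of that arithmetic (my own illustration, not from the article; the 14/15nm node is taken as 15nm here):

```python
# Rough node-to-node scaling on the roadmap Gelsinger described.
# Each ~0.7x linear shrink roughly halves the area per transistor.
nodes_nm = [32, 22, 15, 10]  # treating "14/15nm" as 15nm

for prev, nxt in zip(nodes_nm, nodes_nm[1:]):
    linear = nxt / prev    # linear shrink factor per node
    area = linear ** 2     # area per transistor scales as the square
    print(f"{prev}nm -> {nxt}nm: linear x{linear:.2f}, area x{area:.2f}")
```

Each area factor comes out near 0.5, which is the density doubling Moore’s Law calls for.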

Gelsinger said Intel is always planning about a decade in advance. But those plans also could mean consolidation in the chipmaking industry.

Gelsinger said the world’s largest chipmaker also plans to change the size of its wafers, the discs on which processors are mass produced — shifting from 300mm to 450mm.

The reason is that the cost savings from adopting larger wafers are huge — 40 percent. While no company can ignore that kind of manufacturing cost advantage, the conversion itself is extremely expensive. So it becomes a costly arms race among chipmakers, and not everyone is expected to survive.
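The geometry behind that saving is straightforward: wafer area grows with the square of the diameter, so a 450mm wafer yields roughly 2.25 times as many dies as a 300mm wafer for a broadly similar per-wafer processing cost. A back-of-the-envelope sketch (my own arithmetic, not Intel’s figures, and ignoring edge effects):

```python
import math

# A wafer is a disc, so its area scales with the square of its diameter.
def wafer_area_mm2(diameter_mm: float) -> float:
    return math.pi * (diameter_mm / 2) ** 2

# 450mm vs 300mm: the pi terms cancel, leaving (450/300)^2 = 2.25.
ratio = wafer_area_mm2(450) / wafer_area_mm2(300)
print(f"450mm vs 300mm wafer area: {ratio:.2f}x")  # prints 2.25x
```

More dies per wafer spreads the fixed per-wafer cost over more chips, which is where the cited 40 percent saving comes from once yield and fab amortization are factored in.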

After all, when the industry earlier moved its manufacturing from 200mm to 300mm wafers, independent foundries underwent a period of consolidation. Gelsinger expects similar results from Intel’s upcoming roadmap.

“Companies not spending at least a billion a year on infrastructure are falling off at a rate of Moore’s Law,” he said.

A second major trend Gelsinger sees is that multi-core’s rise will bring teraflop computing to the mass market.

“Our task now before us is to crack terascale computing at everybody’s desktop level,” he said, adding “It’s not going to happen with a broad range of programmers. We need a new language to abstract those problems from the users.”

Gelsinger also sees a future in which compatibility will still reign — in the form of Intel Architecture being deployed everywhere, from milliwatt systems using embedded chips like Atom to petaflops in supercomputers.

And in a future that stars Intel technology in everything from handheld devices to servers, Gelsinger said the company expects to bring connectivity to every human on the planet, in everything they do.

But to execute on such a lofty vision, Intel needs to stay sharp. It underestimated AMD (NYSE: AMD) and was caught flat-footed three times earlier this decade — AMD was first to market with a dual-core processor, 64-bit processors, and on-CPU memory controllers.

Gelsinger said the company was not going to make the same mistakes again.

“We’ve retooled our development processes with the ‘tick-tock’ model,” he said, referring to the two-year cadence in which Intel moves an existing microarchitecture to a new manufacturing process (the “tick”) and then delivers a new microarchitecture on that process the following year (the “tock”).

“We’ve redone the competitive analysis process and how we work with our customers and laid out an agenda to build on our products and strategy,” he added. “We feel good about that as a strategic vector.”
