Intel has taken the wraps off its entry into the graphics processor market, offering up the first technical hints of “Larrabee,” its attempt to take on nVidia and ATI/AMD in that highly competitive space.
In a briefing with journalists ahead of its presentation at the Siggraph show later this month in Los Angeles, three engineers on the team went into great technical detail on the structure of the chip, but declined to give much product specification.
What they would say is that initially, Larrabee will be available as an add-in card, just like nVidia (NASDAQ: NVDA) and ATI (NYSE: AMD) cards are now. Other potential uses and card form factors were not discussed. The product won’t appear on the market until late 2009, if not 2010.
Larrabee is built on the old Pentium technology, but heavily modified and modernized for graphics processing. Intel (NASDAQ: INTC) would not say how many cores would constitute a Larrabee processor, beyond the nebulous promise of “dozens.” The cores all communicate through a wide “ring” bus that allows for fast inter-core communication and data sharing, including cache data. The L2 cache is partitioned among the cores, allowing for data replication and sharing.
Each Larrabee core is a complete x86 core capable of context switching and preemptive multitasking, with support for virtual memory and page swapping. The main difference between Larrabee and other GPUs is that Larrabee will be much more flexible in the steps through which graphical data is processed.
Existing GPU architectures require data to be passed through a battery of fixed stages, from a vertex shader to a rasterizer to a pixel shader, even if a particular processing job doesn’t require every step. And because Intel believes there is no such thing as a “typical” workload, any fixed allocation of hardware among those stages will be wrong for some applications.
“The problem with designing a GPU is how much performance to put into the different segments to balance out variations in the load,” said Larry Seiler, senior principal engineer in the visual computing group at Intel.
So Intel’s solution is to have no fixed-function stages in the pipeline.
“You are entirely in control of how processing happens,” said Seiler. “You can change scheduling, how each stage is handled, you can modify it to handle the characteristics of your workload, or change the rasterizer for your workload.”
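The distinction Seiler describes can be sketched in a few lines of code. This is purely illustrative and not Intel’s API: the stage functions and their arithmetic are stand-ins invented for this example, showing only the structural idea that a fixed pipeline always runs every stage while a software pipeline lets the developer compose just the stages a workload needs.

```python
# Hypothetical sketch, not Intel's API: contrast a fixed GPU pipeline,
# which runs every stage on every job, with a Larrabee-style software
# pipeline composed per workload. Stage bodies are arbitrary stand-ins.

def vertex_shade(prims):
    return [p * 2 for p in prims]      # stand-in for vertex transform

def rasterize(prims):
    return [p + 1 for p in prims]      # stand-in for rasterization

def pixel_shade(frags):
    return [f - 1 for f in frags]      # stand-in for pixel shading

def run_pipeline(stages, data):
    """Push data through each stage in order."""
    for stage in stages:
        data = stage(data)
    return data

# A traditional GPU forces every job through all fixed stages:
FIXED_PIPELINE = [vertex_shade, rasterize, pixel_shade]

# A software pipeline simply omits stages a workload doesn't need,
# e.g. skipping rasterization entirely:
custom_pipeline = [vertex_shade, pixel_shade]

print(run_pipeline(FIXED_PIPELINE, [1, 2, 3]))   # [2, 4, 6]
print(run_pipeline(custom_pipeline, [1, 2, 3]))  # [1, 3, 5]
```

The point is not the arithmetic but the control: in the second pipeline the rasterizer never runs, which on a fixed-function GPU is simply not an option.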
However, Jon Peddie, president of Jon Peddie Research, said Intel’s solution is not necessarily better or worse. “There are fixed functions and logical steps one has to go through in a GPU, but those things are in there for a very good reason. Those are the most efficient ways to do graphics programming,” he told InternetNews.com.
What Intel is doing is adapting its strategy, the x86 architecture, to graphics, Peddie added. “This is really a multi-core CPU. What makes it different from the x86 we are using in our computers is this ring communication for interprocessor communications. That is one of the main differentiators between Larrabee and Nehalem.”
That said, Peddie thinks the “ring” for inter-core communication is a big revolution. “I think it’s fantastic that Intel has done this, because this is the first innovation in computer graphics architecture since the GPU was introduced almost ten years ago. So they get a lot of credit from me for being brave enough to do it,” he said.
The ring “gives you a really fat communication path for every processor to talk to every other processor. That’s something they have that neither ATI nor nVidia has,” said Peddie. “nVidia and ATI have an order of magnitude more processors, but are built in groups or gangs and communicate from group to group. So processor 004 can’t talk directly to processor 794.”
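Peddie’s contrast can be sketched in code. This is a toy model invented for this article, not either vendor’s actual interconnect; the core counts and group size are made up. It shows only the reachability difference: on a ring, any core can reach any other in a bounded number of hops, while in a gang-based design direct communication exists only within a group.

```python
# Toy model, not either vendor's real interconnect: compare all-to-all
# reachability on a ring with group-limited reachability in a gang
# design. Core counts and group size below are illustrative only.

def ring_hops(src, dst, n_cores):
    """Hops for a message between two cores on a bidirectional ring."""
    d = abs(dst - src) % n_cores
    return min(d, n_cores - d)   # take the shorter direction

def grouped_can_talk(src, dst, group_size):
    """In a gang-based design, cores talk directly only within a group."""
    return src // group_size == dst // group_size

# On an 800-core ring, core 4 reaches core 794 in a handful of hops:
print(ring_hops(4, 794, 800))          # 10

# In a gang design with 16-core groups, they can't talk directly:
print(grouped_can_talk(4, 794, 16))    # False
print(grouped_can_talk(4, 7, 16))      # True
```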
Seiler said that the flexibility of Larrabee is not limited to the hardware, but extends to the software as well. “If a developer finds something in the API that limits them, they can create their own,” he said. “We want to ensure developers the freedom to run on Larrabee as they need.”
That also means possible forking as developers improvise their own fixes, the same problem that made Unix so incompatible after many years of proprietary fixes. Intel is aware of that. “We want to give them freedom but we are wary of the potential for splintering. So it’s a balancing act,” said Seiler.
Intel has been heavily romancing major computer graphics experts at universities all over the world, along with all of the major game developers. The paper being presented at Siggraph lists, alongside many Intel engineers, contributors from Stanford as well as Mike Abrash, one of the best-known game graphics programmers.
So Intel is mounting a full-court press with Larrabee, a big change from its less-than-stellar integrated graphics products. “Don’t judge Larrabee by Intel’s current graphics products,” said Peddie. “[Intel CEO] Paul Otellini has taken the handcuffs off the guys at Intel who know how to do graphics. Not only has he taken the cuffs off, he’s given them the checkbook to get some staff and IP behind it. As a result, Intel is going to do it right.”