AMD to Facebook: It’s the Benchmarks, Not Us


A Facebook exec caused a bit of a stir at GigaOm's Structure 09 conference last month when he complained that the newest generation of processors from AMD and Intel was not living up to its performance billing.

“The biggest thing … was less-than-anticipated performance gains from new microarchitectures, so new CPUs from guys like Intel and AMD. The performance gains they’re touting in the press, we’re not seeing in our applications,” Jonathan Heiliger, Facebook’s vice president of technical operations, said at the time.

That had to hurt AMD (NYSE: AMD), a major sponsor of the show. Margaret Lewis, a product marketing director at AMD, said in a blog post, “It was hard not to squirm in my seat.”

Facebook has declined to comment further on Heiliger’s statement.

Intel (NASDAQ: INTC) responded in a comment tinged with a bit of sarcasm: "We understand that Facebook would like to have processors and platforms with unlimited performance, in tiny form factors that are close to free," an Intel spokesperson said in an e-mail. "We will not 'unfriend' Facebook simply because they want tomorrow's products today."

Now AMD has issued a formal reply for the whole world to see, via a blog post from its chief marketing officer, Nigel Dessau.

“As an industry, we need to accept that he makes some good points,” he wrote. “While in raw, classic benchmark terms, we continue to deliver great leaps in performance, I suspect the Facebook IT and programming model doesn’t look like ‘classic benchmarks.’ It’s probably more PHP and Java than C++.”

He also defended AMD against the inevitable accusation that it is hiding behind benchmarks as an excuse, conceding that the industry's commonly cited benchmarks often don't reflect real-world uses.

“For hyperscale datacenter customers — customers who build massive server farms that typically power cloud environments — when a benchmark is a tiny bit off compared to real-world implementations, it can get magnified, a lot,” he wrote.

Dessau also argued that today's benchmarks don't capture the full range of real-world usage models, and called for benchmarks that better account for those different workloads.

All well and good, but as Mercury Research President Dean McCarron noted, if you’re using an interpreted language like PHP, it’s going to be slower than a compiled C++ application.

"If performance matters to you, you don't code in an interpreted language," he said. "This was established back when I started coding in the '70s and '80s. If you want to run fast, you don't code in an interpreted language."

As an interpreted language, PHP has to be executed through a runtime, which is the equivalent of a real-time compile. That makes it inherently slower than C++, a language compiled ahead of time down to machine code and running at top speed. Most benchmark applications are written in C++, so they will generally outrun equivalent PHP code.

McCarron said Dessau’s response was legitimate, since you are going to get different performance results based on the language used.

“I’ve harped on that for a while, whenever someone calls me about server performance comparisons,” McCarron said. “It sounds like a cop-out, but it really depends on your workload. Every workload is unique.”
