Late-week announcement

Intel made a surprise announcement on Friday that it was essentially canceling the current version of the Larrabee project, the discrete GPU that Intel has been talking about since early 2007. While the project lives on, and the team continues to push for many-core x86 GPU designs in the future, it looks like that future just got a whole lot further away.
If you read PC Perspective or listen to our wildly popular podcast, you have heard the name “Larrabee” pop up more than its fair share.  That was the codename for the project Intel announced way back in February 2007 whose goal was to compete with the likes of NVIDIA and ATI in the world of discrete graphics.  Intel’s direction was somewhat revolutionary for a GPU in that it was going to use many small, but standard, x86 cores with custom vector processing units rather than traditional GPU shaders, and depend on a dedicated software stack to make it all work. 

In August of 2008 we got our first juicy details about the architecture: how it would work, what the software engineers had to undertake, etc.  The architectural preview we posted then will probably be of great interest to you, even today, so be sure to check it out. 

An early diagram of Larrabee

The highlights of the architecture included IA x86 cores with additional vector units, dedicated hardware for texturing (the only real dedicated GPU hardware on the chip), a large L2 cache and a ring bus memory controller.  We never got much more detail than this, even as months turned into years: nothing on core counts, frequencies, performance – nada.
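
To make the “lots of small x86 cores plus wide vector units” idea a bit more concrete, here is a minimal C++ sketch of that programming model as we understood it from Intel’s public descriptions: each core runs ordinary loops, and a 16-lane vector unit (512 bits of 32-bit values) shades a batch of 16 pixels per instruction.  The 16-wide lane count matches what Intel disclosed publicly; the function names and shading math below are purely illustrative and are not Intel’s code.

```cpp
// Illustrative sketch only, not Intel code.  It models the Larrabee idea of
// graphics work running as ordinary x86 loops, where each core's 16-lane
// vector unit shades a batch of 16 pixels per iteration.
#include <array>
#include <cstdio>

constexpr int kLanes = 16;               // lanes in one Larrabee-style 512-bit vector register
using VecF = std::array<float, kLanes>;  // stand-in for a 16-wide float register

// A toy "pixel shader": modulate a texture sample by a clamped light term, per lane.
VecF shade(const VecF& texel, const VecF& nDotL) {
    VecF out{};
    for (int i = 0; i < kLanes; ++i)     // on real hardware this loop would be a single vector instruction
        out[i] = texel[i] * (nDotL[i] > 0.0f ? nDotL[i] : 0.0f);
    return out;
}

int main() {
    VecF texel{}, light{};
    for (int i = 0; i < kLanes; ++i) { texel[i] = 0.5f; light[i] = i / 15.0f; }
    VecF color = shade(texel, light);    // one "batch" of 16 pixels shaded at once
    std::printf("lane 15 color: %f\n", color[15]);
    return 0;
}
```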

So today’s announcement was not all THAT surprising.  To be fair, Intel is not canceling the entire Larrabee project; it is just admitting that the first iteration of this architecture is not going to be released in any retail form.  That means the chip will be relegated to powering internal development systems at Intel and the machines of some trusted software partners to pave the way for future many-core software development.  If you were excited about the prospect of getting 48 x86 cores on an add-in card for your computer, you’ll be waiting quite a bit longer now.

Former Intel Exec Pat Gelsinger holding up a Larrabee wafer

With Intel officially calling this a “delay” rather than a scrubbing of the entire idea, we have to wonder WHAT would cause Intel to admit defeat now?  There are only a couple of possible reasons: hardware or software.  The hardware could quite simply not have been fast enough at reasonable power levels for a consumer product.  Did Intel need more efficient x86 cores, or did they realize too late that they needed more cores than they could fit on a die with today’s process technology?  The software was also mind-numbingly complex: creating a software renderer that would be backwards compatible with DX9/10/11 titles, converting their API calls into x86-ready code while being far more efficient than any software renderer has been to date.  Did Intel’s software team simply over-promise and under-deliver when it came down to crunch time?
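
To give a sense of just one sliver of that software problem, here is a toy C++ sketch (our own, not Intel’s renderer) of the innermost question any software rasterizer has to answer: which pixels does a triangle cover?  A real DirectX-compatible renderer also has to handle clipping, fill rules, depth, blending, multisampling and much more, which is exactly why the undertaking was so daunting.

```cpp
// A deliberately tiny, hypothetical example of the kind of work a software
// renderer must do in plain x86 code: decide which pixels a triangle covers
// using edge functions, then print the coverage as ASCII art.
#include <cstdio>

struct Pt { float x, y; };

// Signed area test: positive when point c lies to the left of edge a->b.
static float edge(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

int main() {
    Pt v0{1, 1}, v1{14, 3}, v2{6, 12};   // one counter-clockwise triangle on a 16x16 pixel grid
    for (int y = 0; y < 16; ++y) {
        for (int x = 0; x < 16; ++x) {
            Pt p{x + 0.5f, y + 0.5f};    // sample at the pixel center
            bool inside = edge(v0, v1, p) >= 0 &&
                          edge(v1, v2, p) >= 0 &&
                          edge(v2, v0, p) >= 0;
            std::putchar(inside ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}
```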

Intel is trying to do what 3DLabs attempted with the P10 all those years back: a fully programmable rendering pipeline.  AMD and NVIDIA currently use the DX8-DX10 style pipeline with programmable portions sandwiched between fixed-function units (which is not necessarily a bad thing when talking about performance in today’s applications).
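
As a rough illustration of the difference (and only that), the sketch below models a fully programmable pipeline as a chain of ordinary functions: swap any stage and the pipeline still runs, which is the flexibility Larrabee was chasing.  On a conventional GPU, only some of those links are replaceable shaders.  The stage names here are hypothetical.

```cpp
// Hypothetical sketch of a fully programmable pipeline: every stage is just a
// function that can be swapped out, rather than fixed silicon with programmable
// shader stages sandwiched in between.
#include <cstdio>
#include <functional>
#include <string>
#include <utility>
#include <vector>

using Stage = std::function<std::string(const std::string&)>;

int main() {
    // Each stage is ordinary code; replacing rasterization or blending is just
    // a matter of swapping a function in this list.
    std::vector<std::pair<const char*, Stage>> pipeline = {
        {"vertex",    [](const std::string& in) { return in + " -> transformed"; }},
        {"rasterize", [](const std::string& in) { return in + " -> fragments"; }},
        {"shade",     [](const std::string& in) { return in + " -> colored"; }},
        {"blend",     [](const std::string& in) { return in + " -> framebuffer"; }},
    };

    std::string data = "triangle";
    for (const auto& [name, stage] : pipeline) {
        data = stage(data);
        std::printf("%-9s : %s\n", name, data.c_str());
    }
    return 0;
}
```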

These are all questions that, of course, Intel wouldn’t answer; the company would only say that a combination of hardware and software concerns caused it to delay the Larrabee project as a consumer product until sometime in the future.  You will likely hear NOTHING from Intel about this project until they have a firm grasp on the situation; it was the mouth of a former Intel exec promising to “beat the GPU guys at their own game” that raised expectations to a fever pitch very early.  Obviously too early.  I wouldn’t expect to hear anything until IDF 2010, if even then.

Single Larrabee core diagram

There are a couple of groups really loving this news: NVIDIA and AMD.  Both of these companies have to be letting out a big sigh of relief knowing that Intel has officially retreated back into the caves of the graphics world for at least another couple of years.  For AMD, that means Intel will have no way to compete with Fusion (GPU + CPU) products for quite some time, and for NVIDIA, well, you have another span of time where you don’t have the largest chip designer in the world breathing down your neck. 


Intel demoed Larrabee for the first time at IDF in September 2009

Another interesting note about the Larrabee issues concerns the recently discussed Intel 48-core Single-chip Cloud Computer (SCC): according to Intel the two are completely unrelated, despite the similarities at first glance.  Even though they are both many-core processor designs, the real goal of the SCC chip was to test Intel’s theories on inter-core mesh communications and on working with a multi-core CPU without a shared cache structure.  Apparently all the two have in common is an x86-based core architecture.
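
For readers wondering what “without a shared cache structure” means for software, here is a hedged C++ analogy (not SCC code): instead of cores reading and writing shared data structures, one core posts a message into another core’s mailbox, which is roughly the model the SCC’s on-die message-passing buffers expose over the mesh.

```cpp
// Illustrative analogy only, not SCC code: two "cores" coordinate by passing a
// message through an explicit mailbox instead of sharing cached data.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <optional>
#include <thread>

struct Mailbox {                          // stand-in for a per-core message buffer
    std::mutex m;
    std::condition_variable cv;
    std::optional<int> msg;

    void send(int v) {
        std::lock_guard<std::mutex> lk(m);
        msg = v;
        cv.notify_one();
    }
    int receive() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return msg.has_value(); });
        int v = *msg;
        msg.reset();
        return v;
    }
};

int main() {
    Mailbox box;
    std::thread core1([&] {               // "core 1" waits for a message from the mesh
        std::printf("core 1 received %d\n", box.receive());
    });
    box.send(42);                         // "core 0" sends work instead of writing shared memory
    core1.join();
    return 0;
}
```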

Are we disappointed?  Absolutely.  But am I completely shocked?  Not really.  Intel always had an uphill battle to compete with NVIDIA and AMD in the GPU world and the delays of the Larrabee design put them so far behind that they would have likely never caught up to the performance of the Evergreen or Fermi families from their competition.  Knowing that, it was probably the better choice to admit defeat and go back to the drawing board rather than release a part that simply couldn’t hold its own in such a competitive market.  We had always given Intel the benefit of the doubt when it came to the company’s ability to create a completely new GPU architecture and re-engineer the software stack for graphics as well – but it seems now that we have been fooled.  “Fool me once, shame on you. Fool me – you can’t get fooled again…”

Further Reading: