Details from GDC 2009
Intel was promoting Larrabee quite heavily at GDC in San Francisco this week even though there is still no hardware to be found. For the GDC audience, Intel decided to share a Larrabee instruction emulation layer so that developers can now start to see how the new x86-based GPU design can be efficiently utilized.
If you haven’t heard about Larrabee yet, you have been living under a rock. We have been following the project since 2006, when we first heard mumblings about it, and it can be summed up quickly: Larrabee is a many-core architecture essentially built around a Pentium core, updated with new features and vector processing units to better handle the floating-point math used in graphics and parallel computing. The architecture will first target the discrete graphics market, where Intel believes its design – compared to current NVIDIA and AMD GPUs – will enable “the next decade of innovation.”

(Image: single Larrabee core diagram)
We have a decent amount of detail about the architecture behind Larrabee – Josh wrote up a great article about what we know and don’t know that you should definitely read over if you haven’t done so already. Built from many instances of an in-order processor core, with a new ring-bus-style interconnect for memory and a coherent L2 cache across all cores, Larrabee will be a dramatic shift away from the shader processing designs used today by both major GPU vendors.
The cores (how many, and at what clock speeds, remains unknown) will support the current x86 ISA and up to four threads each, and each core also includes a new vector processing unit. This 512-bit-wide SIMD unit is what the new Larrabee New Instructions (LRBni for short) target.
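To make that width concrete: 512 bits split into 32-bit single-precision floats gives 16 lanes per vector operation. The plain C++ below is our own sketch of that layout and of a multiply-add across it – the type and function names are hypothetical illustrations, not Intel’s actual LRBni intrinsics or prototype library API.

    #include <cstddef>

    // Illustration only -- not Intel's LRBni or its prototype library.
    // A 512-bit vector register holds 16 single-precision floats:
    // 512 bits / 32 bits per float = 16 lanes.
    const std::size_t kLanes = 512 / (8 * sizeof(float)); // == 16

    struct Vec16f {
        float lane[kLanes]; // one 512-bit "register" worth of data
    };

    // Vector multiply-add across all 16 lanes: dst = a * b + c.
    // On Larrabee this maps to one wide instruction; in plain C++
    // (and in an emulation layer) it becomes a scalar loop.
    Vec16f vmadd(const Vec16f& a, const Vec16f& b, const Vec16f& c) {
        Vec16f dst;
        for (std::size_t i = 0; i < kLanes; ++i)
            dst.lane[i] = a.lane[i] * b.lane[i] + c.lane[i];
        return dst;
    }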
What Intel is announcing and showing off at the Game Developers Conference in San Francisco today are the first details of the C++ Larrabee prototype library, which will be made available (immediately, we are told) for developers to begin using and testing. Intel believes this will push developers to “explore the efficiency and flexibility” of Larrabee while providing useful feedback.
It is important to note that this does NOT mean Larrabee hardware is also going to find its way into developers’ hands yet; the software is essentially a Larrabee emulator that compiles and “runs” the code on the host system’s CPU. The current setup will let developers start playing with SIMD as wide as 16 lanes and see how the programming model for Larrabee is similar to, and differs from, traditional x86 programming.
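One concrete way the model differs from traditional scalar x86 code: data is processed 16 elements at a time, with per-lane masks covering the leftovers (LRBni is described as supporting predication through mask registers). Below is a rough plain-C++ sketch of that idiom as an emulator might express it – again, the names are ours, not the prototype library’s.

    #include <cstddef>

    typedef unsigned short Mask16;  // one predication bit per lane
    const std::size_t kLanes = 16;  // 512-bit vectors of 32-bit floats

    // Emulated masked vector add: out[i] = x[i] + y[i] only where the
    // mask bit for that lane is set; masked-off lanes stay untouched.
    void masked_add(float* out, const float* x, const float* y, Mask16 mask) {
        for (std::size_t i = 0; i < kLanes; ++i)
            if (mask & (1u << i))
                out[i] = x[i] + y[i];
    }

    // Walk an arbitrary-length array 16 lanes at a time; the final,
    // partial step uses a mask rather than a scalar clean-up loop --
    // the idiom a wide, predicated ISA encourages.
    void add_arrays(float* out, const float* x, const float* y, std::size_t n) {
        for (std::size_t i = 0; i < n; i += kLanes) {
            const std::size_t remaining = n - i;
            const Mask16 mask = (remaining >= kLanes)
                ? Mask16(0xFFFFu)
                : Mask16((1u << remaining) - 1);
            masked_add(out + i, x + i, y + i, mask);
        }
    }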
In an interview for an Intel-produced magazine, Tom Forsyth, one of the hardware and software architects behind Larrabee, brought up a couple of interesting new points. First, it was revealed that the Larrabee architecture has been locked down for well over a year now – everything the team is working on at this point is optimization, both in the physical hardware design and on the software side. Forsyth also mentioned a few near-term features that Larrabee will offer the rasterization pipeline: render-target reads (shaders that can read as well as write the current render target, enabling fully custom blends), demand-paged texturing (the ability to read from a texture even when not all of it resides in memory) and order-independent translucency (translucent objects can be rendered like any other surface type while the GPU handles the ordering for lighting effects and the like). These are features that COULD be done on standard GPUs today, but with inherent performance penalties.
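As a quick illustration of why render-target reads matter, here is our own hedged C++ sketch of a blend that fixed-function hardware cannot express directly: the shader reads the destination pixel and computes arbitrary math with it. On today’s GPUs, this class of effect typically requires a render-to-texture “ping-pong” pass – exactly the kind of performance penalty mentioned above.

    struct Color { float r, g, b, a; };

    // Hypothetical custom blend, possible when a shader can read the
    // pixel already in the render target: the destination color's
    // luminance scales how much of the source is added. No standard
    // fixed-function blend equation expresses this directly.
    Color custom_blend(const Color& src, const Color& dst) {
        const float lum = 0.299f * dst.r + 0.587f * dst.g + 0.114f * dst.b;
        Color out;
        out.r = dst.r + src.r * (1.0f - lum);
        out.g = dst.g + src.g * (1.0f - lum);
        out.b = dst.b + src.b * (1.0f - lum);
        out.a = dst.a;
        return out;
    }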
If you aren’t a coder or developer, chances are today’s announcement just won’t be that interesting. What we were hoping to get from Intel was some sort of scaling factor relating this CPU-based emulation of LRBni to the performance of the actual product when it’s released – that would have given us SOME kind of hint as to what to expect from Larrabee designs next year. Obviously that didn’t happen, and I wonder how many developers are going to be willing to spend time learning a new programming model for games without more information on the resulting performance benefits.
That being said, this is an important first step toward Larrabee’s adoption, and we will be monitoring developer response to it very closely.
Additional reading on Larrabee, ray tracing and more:
- Intel’s Larrabee Architecture
- Intel IDF Preview: Tukwila, Dunnington, Nehalem and Larrabee
- NVISION08 Summary – Keynote, TWiT UGM, 3D Gaming and GPU Ray Tracing
- Ray Tracing in Games: A Story from The Other Side
- Crytek’s Cevat Yerli Speaks on Rasterization and Ray Tracing
- John Carmack on id Tech 6, Ray Tracing, Consoles, Physics and more
- NVIDIA Comments on Ray Tracing and Rasterization Debate