Pentium Strikes Again!
Intel’s Larrabee architecture could be the second coming of 3D graphics and high-performance computing, but it is still a year away, and many are questioning Intel’s direction of using simple x86 processors to achieve performance. Today Intel has given us a slightly more in-depth look at what it hopes will be the future of 3D graphics.
Larry Seiler is exceptionally proud to present some of the first bits of information on the architecture we know as Larrabee.
Several years back Intel let loose that it was developing a new and unique graphics architecture aimed at both the enthusiast 3D market and stream computing. This announcement brought forth an abundance of opinions which covered the emotional spectrum from great excitement to great dread. The excitement was that a proven and innovative processing giant was entering the graphics arena, potentially bringing in a competitive and high-performance part which would provide competition for both AMD (ATI) and NVIDIA. The great dread was that a proven and innovative processing giant was entering the graphics arena, potentially bringing in a competitive and high-performance part which would provide competition for both AMD (ATI) and NVIDIA. All joking aside, other fears are primarily based on Intel’s previous graphics offerings, namely its poorly supported and poorly performing integrated parts.
While Intel has not come out with an actual working part that would either allay or reinforce our fears, it has given us insight into the inner workings of the Larrabee architecture. Most of the information presented was from a very high-level perspective, and Intel did not get into the specifics that many were hoping to hear (namely, concrete details for potential parts to be released next year). What we were presented with did answer some of the more basic questions about how the architecture works, how Intel is dividing up the workload, and how flexible we can expect this part to be.
Intel asked itself, “What is better?” If it were to take two products of similar die size, what kind of results would it get in theoretical vector throughput? In this case, by using many simpler x86 cores with wide vector units rather than a few large cores with SSE units, Intel claims it can achieve 20x more theoretical throughput from the “simpler” product.
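The 20x figure falls out of simple lane counting. The core counts and vector widths below are illustrative assumptions for the comparison (a pair of large out-of-order cores with 4-wide SSE versus ten simple in-order cores with 16-wide vector units in a similar die area), not numbers Intel confirmed in this briefing:

```python
# Back-of-envelope version of the area-for-throughput argument.
# All figures are illustrative assumptions, not official Intel specs.
big_cores, sse_width = 2, 4          # assumed: 2 large cores x 4 SP lanes (SSE)
simple_cores, vpu_width = 10, 16     # assumed: 10 simple cores x 16 SP lanes (VPU)

big_lanes = big_cores * sse_width        # 8 vector lanes total
simple_lanes = simple_cores * vpu_width  # 160 vector lanes total

ratio = simple_lanes // big_lanes
print(f"{ratio}x theoretical vector throughput")  # prints "20x theoretical vector throughput"
```

The point is that per-lane peak throughput is identical; the simpler cores simply pack far more lanes into the same silicon budget, at the cost of single-thread performance.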
NVIDIA and AMD have been doing graphics for quite some time now, but Intel is still relatively new to the scene. NVIDIA was started in 1993, while ATI was founded in 1985. Both companies have extensive IP portfolios related to 2D/3D graphics, and a lot of prior art to rely upon in case of patent disputes. Intel’s foray into graphics started in 1998 with the i740 graphics chip, which was based on the Real3D StarFighter technology (Intel purchased Real3D’s assets in 1999), but it was late to market as compared to other solutions. While it featured advanced AGP interconnect technology, its overall performance was severely lacking compared to other products of that time. Intel then concentrated on using the i740 technology in a lineup of integrated graphics parts which went unchanged for quite a few years.
So when Intel announced that it was embarking on a path to release an enthusiast-level 3D graphics product, its claims were primarily met with skepticism. At that time NVIDIA was riding the wave of profitability from introducing its GeForce 8 series of parts with 128 stream processors running at 1.3 GHz+, and AMD had released the HD 2000 series, with the top-end part featuring 320 stream processors. Each of these products has theoretical performance which is pretty outstanding when dealing with single-precision floating point operations. So in many people’s minds the bar is set very high for Intel to jump into the fray with a high-end part.
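To put rough numbers on that bar, peak single-precision throughput can be estimated as units × clock × flops per cycle. The sketch below assumes each stream processor retires one multiply-add (2 flops) per clock and uses the commonly cited shader clocks for the GeForce 8800 GTX and Radeon HD 2900 XT; treat the results as ballpark figures, not vendor-quoted peaks (NVIDIA’s own marketing numbers counted an extra MUL issue and were higher):

```python
# Rough peak single-precision throughput for the 2007-era flagships.
# Assumption: one multiply-add (2 flops) per stream processor per clock.
def peak_gflops(units, clock_ghz, flops_per_cycle=2):
    """Theoretical peak GFLOPS = units x clock (GHz) x flops per cycle."""
    return units * clock_ghz * flops_per_cycle

geforce_8800_gtx = peak_gflops(128, 1.35)    # 128 SPs @ 1.35 GHz -> ~346 GFLOPS
radeon_hd_2900xt = peak_gflops(320, 0.742)   # 320 SPs @ 742 MHz  -> ~475 GFLOPS
print(round(geforce_8800_gtx), round(radeon_hd_2900xt))
```

Any Larrabee part arriving a year later would be measured against successors to these chips, not the chips themselves, which only raises the bar further.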
Intel is trying to do what 3DLabs attempted with the P10 all those years back: a fully programmable rendering pipeline. AMD and NVIDIA currently use the DX8-through-DX10-style pipeline, with programmable portions sandwiched between fixed-function units (which is not necessarily a bad thing when talking about performance in these applications).
The very basis of the Larrabee architecture is the part that most people are having trouble with. While Intel’s ideas about what a next-generation architecture should be are unique and novel, they are confusing to many who are used to seeing hundreds of simpler units doing the work behind 3D graphics.