Calculus, bitches!

As a CS major, I'm all too familiar with calculus.  Hopefully, not all of you were so fortunate…

The rest of the graphical computation story is brought to life by calculus, the math that allows us to approximate everything from areas to light rays.  A first-order approximation is the most basic form: it is the easiest to compute but the least accurate in its result.  With each successive order of the approximation, quality improves, but the mathematical load placed on the processor increases dramatically. 

Doom – 1993

Doom was the first game to really attempt to simulate reality, and it used the most basic first-order approximation to generate its graphics: a single bounce of light from the texture or sprite to the 3D camera being manipulated.

Unreal – 1998

Sweeney’s own Unreal was an early adopter of the second-order approximation, allowing light rays to bounce off of two surfaces before converging on the virtual camera.  According to Sweeney, 99% of today’s games on consoles and the PC still use engines based on this type of simulation method.

Samaritan Demo – 2011

Third-order approximations are much more complex, but they allow light to bounce through many objects, and as a result you see more realistic reflections, skin coloring and highlights.  The Samaritan demo that Epic Games showed last year is the company’s investment in this type of computing, and it requires several of today’s fastest GPUs to render with even minimal interaction and lower-than-desired frame rates. 
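To make the "order of approximation" idea concrete, here is a tiny, deliberately simplified cost model: assume every surface a light path hits spawns some number of secondary rays, so each extra bounce multiplies the work done per pixel.  The branch factor and the mapping of orders to eras below are my own illustrative assumptions, not figures from Sweeney's talk, and none of these engines actually works this way under the hood.

```python
# Toy cost model for bounce-limited lighting: if every surface hit spawns
# `branch` secondary rays, the total number of rays traced per pixel grows
# geometrically with the order of the approximation.  The branch factor of 8
# is an arbitrary illustrative choice, not a number from the talk.

def rays_per_pixel(order, branch=8):
    """Total rays traced for one pixel when light is followed through
    `order` bounces and each hit spawns `branch` secondary rays."""
    return sum(branch ** k for k in range(order + 1))

for order, era in [(1, "first order (Doom era)"),
                   (2, "second order (Unreal era)"),
                   (3, "third order (Samaritan demo)")]:
    print(f"{era:28s}: ~{rays_per_pixel(order):4d} rays per pixel")
```

Even this crude sketch shows why each additional order puts a dramatically higher load on the GPU.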

As an interesting demonstration, Sweeney gives us the approximate computing power these titles required.  The original Doom needed 10 MFLOPS (millions of floating-point operations per second) to run at 320×200 in 1993, while Unreal required 1 GFLOPS (billions) to run at 1024×768 in 1998; both at 30 Hz. 

The Samaritan demo that Epic unveiled in 2011 was running at 1920×1080, still at only 30 frames per second, with more than 40,000 operations per pixel.  The total required GPU computing power was 2.5 TFLOPS (take a guess?) and it was running on three GTX 580 cards, if my memory serves.  Of note: the latest Radeon HD 7970 GPU is capable of a theoretical 3.79 TFLOPS, though how much of that an engine like Samaritan could actually utilize has yet to be fully determined.

Also interesting for the "consoles are good enough" crowd: take a look at the compute power of the Xbox 360 – only 250 GFLOPS, a tenth of the power required to run Samaritan. 
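Those numbers are easy to sanity-check.  Dividing each title's quoted FLOPS figure by its pixel count and frame rate gives the per-pixel arithmetic budget; all of the inputs below come straight from the figures above, and only the per-pixel results are derived.

```python
# Back-of-the-envelope math on the figures quoted above: resolution,
# frame rate and FLOPS come from the article, the per-pixel budgets are
# simply derived from them.

titles = {
    # name: (width, height, frames per second, quoted FLOPS)
    "Doom (1993)":      (320,  200,  30, 10e6),    # 10 MFLOPS
    "Unreal (1998)":    (1024, 768,  30, 1e9),     # 1 GFLOPS
    "Samaritan (2011)": (1920, 1080, 30, 2.5e12),  # 2.5 TFLOPS
}

for name, (w, h, fps, flops) in titles.items():
    ops_per_pixel = flops / (w * h * fps)
    print(f"{name:16s} ~{ops_per_pixel:10,.0f} ops per pixel per frame")

# Samaritan works out to roughly 40,000 operations per pixel, matching the
# figure above, and the Xbox 360's 250 GFLOPS really is a tenth of the
# 2.5 TFLOPS the demo demands.
print("Xbox 360 / Samaritan:", 250e9 / 2.5e12)
```

The jump from about 5 operations per pixel in Doom to around 40,000 in Samaritan is the whole story of this section in two numbers.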

And while the visual quality we are seeing in games like Battlefield 3 and the Samaritan demo is impressive, Sweeney was quick to point out that to truly get to movie-quality lighting we will need to progress even beyond the third-order approximations that are the limit today.  We will likely need another order-of-magnitude increase in computing power to reach that fourth level – PetaFLOPS are in our future.

Because we know (well, someone knows) and completely understand how lighting, shadows, skin, smoke and other complex visual challenges work in the scientific sense, they can be approximated accurately.  And with enough orders of approximation, as we have seen in the slides above, we can get very close to perfection.  Sweeney estimates that we will need around 5,000 TFLOPS of performance, or 5 PFLOPS, to reach that goal.  That is a factor of roughly 2000x over today’s best GPU hardware and leaves a lot of room for development from NVIDIA, AMD and even Intel before we reach it.
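For scale, here is that gap expressed against the two baselines already mentioned in this article – the Samaritan demo's 2.5 TFLOPS requirement and the Radeon HD 7970's theoretical peak.  Every input is a figure quoted above; the ratios are nothing more than division.

```python
# How far away Sweeney's movie-quality target is, using figures from above.
target      = 5e15     # 5 PFLOPS (5,000 TFLOPS)
samaritan   = 2.5e12   # GPU power the Samaritan demo required
radeon_7970 = 3.79e12  # theoretical peak of the Radeon HD 7970

print(f"Target vs Samaritan requirement: {target / samaritan:.0f}x")
print(f"Target vs fastest single GPU:   ~{target / radeon_7970:.0f}x")
```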

Even Sweeney wouldn’t put a time frame on hitting that 5,000 TFLOPS mark, but depending on both the advancement of process technology and the drive of the designers at these technology firms, it could be as soon as 5 years or as long as 10.  If it takes longer than that, I feel we will have wasted the potential of the hardware available to us. 

What may hinder the game development community is not the graphics and lighting challenges but rather the need to approximate things we just don’t have algorithms for yet.  Show me a textbook that details human thought (for AI), movement (for animation) or personality (for believable characters) in a mathematical formula, and I’ll show you Josh’s natural hair growing back in.  Sweeney states in his talk that we simply lack a fundamental understanding of these things, and even with an infinite amount of computing horsepower our developers would be lost. 

 

There is quite a bit more you can learn from listening to the remainder of Tim Sweeney’s talk (found over at Gamespot), including his thoughts on Moore’s Law, stacked transistors and the future of computer interaction.  For me, though, the primary point is that the idea that the hardware races are over because of a lack of NEED for additional computing horsepower is simply false.  The hardware races might slow down due to a lack of competition or without the profitable discrete add-in card market to drive them, but those are tangential reasons at best. 

With one of our current top gaming software developers telling us we need something on the order of a 2000x performance increase over current GPU hardware to reach our peak of creativity, I have much more faith that hardware companies like AMD, Intel, NVIDIA and others we might not have heard of yet are busy working on the future.  
