And I Don’t Mean “Cold”
With recent accusations of “they’re cheating again,” I decided to dig a bit further and talk to a few of the players in the 3D graphics world. The results may not be surprising, but the journey certainly was interesting!
A certain article was published on the web claiming that NVIDIA was cheating on their 3D Mark Vantage scores by using a special driver which enabled GPU physics on CPU Test #2. At first glance we get visions of 2003, when NVIDIA did some interesting things with their driver optimizations to increase their 3D Mark scores. Once we start reading, though, we start to scratch our heads in confusion.

Cheating is a bad word, especially in this industry. If someone finds out you have been cheating (be it on 3D Mark or doing LOD tricks in Quake/Quack), then the repercussions can last for years. So saying that NVIDIA is cheating on this matter is a pretty grave accusation. To check the veracity of this claim, I decided to talk to quite a few different people who are directly connected to 3D Mark and the graphics industry.
While it would have been easiest to go directly to NVIDIA, I thought I would dip into a few more ponds for information. So for one hectic afternoon I scheduled calls with Roy Taylor of NVIDIA, Dave Erskine and David Baumann of AMD, Mark Rein of Epic Games, and finally Oliver Baltuch of Futuremark.
The main point of contention here is obviously the use of a specialized driver which enables GPU physics. This driver apparently sends all of the dedicated Physics Processing Unit (PPU) calls to the GPU, which allows the application to “see” a standalone PPU onboard that it can use. Now, is exposing this functionality a cheat? Or is it merely a technology demonstration?
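To make the concept concrete, here is a minimal sketch in CUDA of what “sending the PPU calls to the GPU” means in principle. This is my own illustration with made-up names, not NVIDIA’s driver or the actual PhysX API; the point is simply that the application calls one physics entry point and never knows whether PPU silicon or a CUDA kernel answers it.

```cuda
// Hypothetical sketch of the routing idea; every name here is made up.
#include <cuda_runtime.h>
#include <cstdio>

// Stand-in for "the GPU doing the PPU's work": advance each object independently.
__global__ void integrate(float* pos, const float* vel, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pos[i] += vel[i] * dt;
}

// Hypothetical entry point the application would reach through the physics API.
// Whether this routes to PPU silicon or to a GPU is invisible to the caller.
void ppu_simulate(float* pos, float* vel, int n, float dt) {
    float *dPos, *dVel;
    cudaMalloc(&dPos, n * sizeof(float));
    cudaMalloc(&dVel, n * sizeof(float));
    cudaMemcpy(dPos, pos, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dVel, vel, n * sizeof(float), cudaMemcpyHostToDevice);
    integrate<<<(n + 255) / 256, 256>>>(dPos, dVel, n, dt);
    cudaMemcpy(pos, dPos, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dPos);
    cudaFree(dVel);
}

int main() {
    float pos[4] = {0, 1, 2, 3};
    float vel[4] = {1, 1, 1, 1};
    ppu_simulate(pos, vel, 4, 0.016f);   // one 60 Hz step
    printf("%.3f %.3f %.3f %.3f\n", pos[0], pos[1], pos[2], pos[3]);
    return 0;
}
```

If every physics call the application makes is serviced this way, the “PPU” the benchmark detects is simply the GPU wearing a different hat.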
Original PhysX PPU chip
They Call Me Roy “Admiral Lord Nelson” Taylor
I was given the chance to have a lengthy chat with Roy about the situation, as well as what NVIDIA is hoping to do with their PhysX platform. Unfortunately, I did not have a recorder on the phone I was using, and my shorthand is pretty pathetic, so I am summarizing most of the interview I had with him (as well as those with the others). I also strayed from the primary purpose of the interview to dig a bit deeper into how NVIDIA is handling GPU physics.
Of course Roy thought the idea that they were cheating was bunk. This is not a surprising statement given that he works for NVIDIA. NVIDIA bought up Ageia to utilize the technology that had been developed for this standalone physics API. Their entire goal was to eventually fold the physics middleware into GPU physics, which was pretty evident once production of standalone PPU cards stopped.
The question we must ask is: if a company puts out a product which accelerates a certain test in a benchmark, but which can also be used in upcoming games if they are coded for it, does it merit the accusation of cheating? To get a better idea of this, we have to go back a bit. Two years ago the people at Futuremark decided to step away from Havok and go with Ageia’s PhysX middleware. The thinking behind that was that Havok had been bought by Intel, while the PhysX software worked equally well on AMD and Intel processors. It also offered the ability to run on a dedicated Physics Processing Unit, which was really the ultimate in enthusiast hardware at the time.
While the folks at NVIDIA did not come right out and say it, considering how easy it is to enable GPU physics in 3D Mark Vantage, it is obvious that the program was created from the get-go to utilize accelerated physics and GPU physics. In the control panel the user can enable/disable the onboard PPU. Since the GPU is doing the PPU’s work, how exactly is that cheating? If the GPU were intercepting and acting on only some of the calls, then we could say that this optimization may not be above board. But in testing, the GPU physics driver is intercepting and acting upon all of the application’s physics calls.
It was very obvious from the beginning that the folks I was talking to at NVIDIA were quite confused about the sentiment being bandied about the internet on June 24, 2008. So once I had a clear picture of their take on the situation, I thought it time to dive into more questions about GPU-accelerated physics.
Why exactly is NVIDIA doing this? Some think they are picking up the last physics middleware developer to keep it away from other companies, and from a business standpoint that is certainly true. What we must remember, though, is that NVIDIA is a very technology-driven company. Rarely do they buy up a company and just sit on the technology. The technology they do come by eventually shows up in products, and in this case they are pursuing PhysX because it will help to make games that much more immersive.
The example Roy gave me was of an unnamed and unreleased title which uses PhysX and supports GPU-accelerated physics. The title focuses on sea battles between wooden ships in the mid-1800s. In one scene, if the player manages to hit the opposing vessel in the powder magazine, the ship explodes with a thousand pieces thrown into the air, each piece making an individual splash and wave when it hits the water. Apparently the scene is quite breathtaking, and the coolness factor is simply second to none. This is all done with GPU physics; if that is turned off, then the ship merely explodes with no shrapnel and just sinks into the ocean.
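To give a sense of why a scene like that maps so naturally onto the GPU, here is a rough sketch of the idea: every piece of debris is independent, so each one can be handed to its own thread. This is purely my own illustration, not the game’s or PhysX’s actual code.

```cuda
// Illustrative only: one thread per piece of exploding debris.
#include <cuda_runtime.h>
#include <vector>

struct Debris { float x, y, z, vx, vy, vz; int splashed; };

__global__ void update_debris(Debris* d, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    d[i].vy -= 9.81f * dt;        // gravity pulls each piece down
    d[i].x  += d[i].vx * dt;
    d[i].y  += d[i].vy * dt;
    d[i].z  += d[i].vz * dt;

    // When a piece crosses the water line, flag it so the engine can spawn
    // an individual splash and wave for that one piece.
    if (d[i].y < 0.0f && !d[i].splashed) d[i].splashed = 1;
}

int main() {
    const int n = 1000;           // "a thousand pieces"
    std::vector<Debris> h(n);
    for (int i = 0; i < n; ++i) {
        h[i] = Debris();
        h[i].y  = 10.0f;          // start above the water line
        h[i].vx = i * 0.01f;
        h[i].vy = 20.0f;
        h[i].vz = -i * 0.01f;
    }

    Debris* d;
    cudaMalloc(&d, n * sizeof(Debris));
    cudaMemcpy(d, h.data(), n * sizeof(Debris), cudaMemcpyHostToDevice);
    update_debris<<<(n + 255) / 256, 256>>>(d, n, 0.016f);
    cudaMemcpy(h.data(), d, n * sizeof(Debris), cudaMemcpyDeviceToHost);
    cudaFree(d);
    return 0;
}
```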
When I asked what kind of overhead they were seeing in such situations, their answer was pretty straightforward. A product such as the GTX 280 can run 1920×1200 with 4X/16X AA and full 16X AF all day long, and will continue to do so with GPU physics enabled. At 2560×1600 it will slow down quite a bit compared to having GPU physics turned off. So there is an overhead, but the top-end products have more than enough horsepower to handle it in most situations. We can assume that the earlier GeForce 9000 and 8000 series will not fare nearly as well in such situations, but NVIDIA is aiming this at the enthusiast who is willing to throw down the money for these high-end cards.
Overall performance is also affected by which effects are being used. The current GTX 280 is able to handle particle physics about 20X faster than a high-end quad core CPU. When you start looking into fluids and soft body effects, that drops to between 5X and 8X the performance of that same quad core. So mileage is going to vary depending on which effects are used and at what speed they can be simulated.
Another worry is the overhead of having the graphics driver and CUDA operating together on one GPU. So far it does not appear to be much of a problem. Yes, there is going to be more overhead from the OS standpoint, but the development of the graphics driver and CUDA has been going on hand in hand for several years now. So when enabling GPU physics, we can expect to see a bit more CPU usage to handle how data is sent to the GPU, but the overall effect should be pretty minimal according to NVIDIA.
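For what it’s worth, the standard CUDA trick for keeping that host-to-GPU traffic cheap is pinned memory plus asynchronous copies on a stream, so the CPU only pays to queue the work each frame. The sketch below is my own illustration of that general technique, not a peek inside NVIDIA’s driver.

```cuda
// Pinned host memory + async copies on one stream: the CPU enqueues and moves on.
#include <cuda_runtime.h>

__global__ void step(float* state, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) state[i] += dt;    // placeholder for real physics work
}

int main() {
    const int n = 1 << 16;
    float *hState, *dState;
    cudaMallocHost(&hState, n * sizeof(float));   // pinned: enables async copies
    cudaMalloc(&dState, n * sizeof(float));
    for (int i = 0; i < n; ++i) hState[i] = 0.f;

    cudaStream_t physics;
    cudaStreamCreate(&physics);

    // Copy in, simulate, and copy out are all queued on one stream; the CPU
    // only synchronizes once per frame.
    cudaMemcpyAsync(dState, hState, n * sizeof(float), cudaMemcpyHostToDevice, physics);
    step<<<(n + 255) / 256, 256, 0, physics>>>(dState, n, 0.016f);
    cudaMemcpyAsync(hState, dState, n * sizeof(float), cudaMemcpyDeviceToHost, physics);
    cudaStreamSynchronize(physics);

    cudaStreamDestroy(physics);
    cudaFree(dState);
    cudaFreeHost(hState);
    return 0;
}
```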
I also asked Roy what they were doing in terms of asymmetric “physics SLI”. What I mean here is using the bigger/badder card to do all the rendering (vertex, geometry, and pixel shading) while a smaller, cheaper, cooler-running card handles all of the dedicated physics work. Of course the boys were not willing to give out any specifics on unannounced future products, but they were willing to say that they are looking down all performance avenues. If such a setup makes sense and works (and I think it would), then they will develop it. While NVIDIA would like to sell as many high-end cards as possible, they surely would not be offended if someone bought a high-end card along with a mid-range or budget card to handle the physics offload.
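In CUDA terms, the basic ingredient is already there: an application can point its physics kernels at a different device than the one doing the rendering. The sketch below is my own illustration of that idea, not an announced NVIDIA feature.

```cuda
// Steer physics work at a second GPU when one is present.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void physics_step(float* state, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) state[i] += dt;    // placeholder for real physics work
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    // Use the second GPU for physics when available; otherwise share device 0
    // with the renderer.
    int physicsDevice = (count > 1) ? 1 : 0;
    cudaSetDevice(physicsDevice);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, physicsDevice);
    printf("Physics offloaded to device %d: %s\n", physicsDevice, prop.name);

    const int n = 4096;
    float* dState;
    cudaMalloc(&dState, n * sizeof(float));
    cudaMemset(dState, 0, n * sizeof(float));
    physics_step<<<(n + 255) / 256, 256>>>(dState, n, 0.016f);
    cudaDeviceSynchronize();
    cudaFree(dState);
    return 0;
}
```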
Future gaming heaven? Graphics, physics and more
The final area I inquired about was the use of their middleware in professional applications. When I touched upon this, Roy instantly said, “There is TREMENDOUS interest in professional applications in which this technology could be used.” He obviously did not go into detail on the subject, but considering what NVIDIA is doing with their CUDA programming language and their Tesla initiative, it is not exactly a giant leap to consider that NVIDIA would market a professional version of PhysX to quite a few industries and research institutions. Some applications I could foresee this being used in are crash simulations for the automobile/marine/aerospace industries (a combination of rigid body physics, impacts, materials, explosions, etc.), fluid dynamics (modeling injectors while simulating the cohesion, adhesion, and viscosity of the fluid), and eventually lower-level atomic interactions, which could be a tremendous boost for high-energy physics.
Obviously NVIDIA is very excited about the prospects of GPU physics, and if the uptake is good, then users and consumers are the big winners. Instead of having to buy a standalone PPU, which may not make sense considering the number of games out there that support it, the user can simply update the drivers on their video card and get that functionality for free.
Oh, and “Trafalgar” does not sound nearly as nifty as when spoken in its native tongue.