Battlefield 3 (DirectX 11)
Battlefield 3™ leaps ahead of its time with the power of Frostbite 2, DICE's new cutting-edge game engine. This state-of-the-art technology is the foundation on which Battlefield 3 is built, delivering enhanced visual quality, a grand sense of scale, massive destruction, dynamic audio and character animation utilizing ANT technology as seen in the latest EA SPORTS™ games.
Frostbite 2 now enables deferred shading, dynamic global illumination and new streaming architecture. Sounds like tech talk? Play the game and experience the difference!
Our Settings for Battlefield 3
Here is our testing run through the game, for your reference.
While all three cards are able to keep Battlefield 3 running well at 1920×1080, the FRAPS-based data shows the HD 7970s in CrossFire (standing in for the HD 7990) as the clear performance leader, followed by the GTX 690, with the GTX Titan rounding things out.
Well, things change quickly around these parts: once any runts or drops are removed, the observed frame rate for the HD 7990 comes down considerably.
Our plot of frame times from the Frame Rating capture technology shows two interesting items. First, the HD 7970s in CrossFire produce alternating fast/slow frame times, usually indicative of runts: frames that occupy so few of the screen's scanlines that they add nothing to apparent performance. Second, even though the GTX 690 posts better frame rates than the GTX Titan, it definitely has more frame time variance, as evidenced by the wider blue band of color in the image above.
With the runts taken out, the minimum frame rate data works out to an observed FPS average of about 102 FPS for the HD 7970s, 120 FPS for the GTX Titan and 140 FPS for the GTX 690.
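To make the runt-removal step concrete, here is a minimal sketch in Python of how an observed frame rate could be computed once per-frame scanline coverage has been extracted from the capture. The frame times, scanline counts and the 20-scanline cutoff are hypothetical, illustrative values, not the exact parameters behind our charts.

```python
def observed_fps(frame_times_ms, scanlines_per_frame, runt_threshold=20):
    """FPS counting only frames that cover at least `runt_threshold` scanlines.

    The total run time still includes the time the runts spent on screen,
    so discarding them lowers the reported rate -- which is the point.
    """
    total_time_s = sum(frame_times_ms) / 1000.0
    visible = sum(1 for sl in scanlines_per_frame if sl >= runt_threshold)
    return visible / total_time_s

# Hypothetical CrossFire-style capture: every other frame is a sliver.
times     = [8.0, 8.0] * 50   # FRAPS would report 100 frames / 0.8 s = 125 FPS
scanlines = [1060, 8] * 50    # ...but half of those frames are runts
print(observed_fps(times, scanlines))  # ~62.5 FPS observed
```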
But once we look at the variance picture again we find that the GTX 690 and the GTX Titan have swapped places, with the single-GPU Titan delivering a smoother overall experience than even the GTX 690. Both NVIDIA solutions are drastically better than the HD 7990 / HD 7970s in CrossFire.
At 2560×1440 the FRAPS results look pretty similar to those above…
But once we take away the runts and drops we find the HD 7970s in CrossFire fall behind the performance of both of the NVIDIA GeForce cards.
Ouch, another blanket of color from the Radeon solution, indicating unsmooth and inconsistent frame rates! If we look just at the NVIDIA side of the equation, we again see a thinner band of color in the GTX Titan results, indicating tighter and more consistent frame times throughout the benchmark run.
Here is an individual run graph for the HD 7970s in CrossFire to help demonstrate how the runts cause the observed frame rates to be lower.
And the two runs for the GTX 690 and the GTX Titan do not indicate any runts at all…
These minimum FPS percentile charts show some pretty dramatic differences, even between the competing NVIDIA options. The HD 7970s in CrossFire average around 60 FPS, the GTX Titan at 75 FPS and the GTX 690 at 95 FPS.
But the frame variance results, our ISUs (International Stutter Units), once again prove that the single GPU solution has a more consistent and fluid frame time result, with the HD 7970s in CrossFire really separating themselves (not in a good way) starting at the 80th percentile.
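For readers who want to reproduce the shape of these percentile charts from their own frame-time logs, here is a minimal sketch (NumPy, hypothetical data) of the basic idea; it reflects our reading of how such a chart is built, not the exact tooling behind these images.

```python
import numpy as np

def fps_percentile_curve(frame_times_ms, percentiles=(50, 75, 90, 95, 99)):
    """Minimum-FPS percentile readout from a list of frame times.

    The value at percentile P is the frame rate that P% of frames met or
    exceeded, so the right-hand tail of the curve exposes the slow frames.
    """
    fps = 1000.0 / np.asarray(frame_times_ms, dtype=float)
    return {p: float(np.percentile(fps, 100 - p)) for p in percentiles}

# Hypothetical run: ~75 FPS with an occasional 30 ms hitch mixed in.
rng = np.random.default_rng(0)
run = np.where(rng.random(2000) < 0.05, 30.0, rng.normal(13.3, 1.0, 2000))
print(fps_percentile_curve(run))
```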
Even though we only have NVIDIA results for 5760×1080, due to the extreme number of dropped frames on the HD 7990 / HD 7970s, comparing these two options is interesting. In both FRAPS and observed average frame rates, the GTX 690 shows as the faster of the two options, running ahead of the Titan the entire time.
But the plot tells an interesting story – the frame times on the GTX 690 are not as consistent or as smooth as on the GTX Titan. They average much lower, based on where the bulk of the blue band resides in comparison to the green band, but the spikes that show up on the GTX 690 are gone completely on the GTX Titan.
If we look at only the minimum FPS marks we find the GTX 690 to be 33% faster in average frame rate over the entire run, but based on the graph above (and the one below) that isn't the whole story.
Here we see the result of all of those "spikes" in frame times – a pretty sizeable difference in frame variance going from the GTX 690 to the GTX Titan. While the Titan never deviates more than 2.5 ms from the running average of the previous 20 frames, the GTX 690 shows 8-9 ms jumps at times, which will likely cause some noticeable stutter.
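To pin down what this variance figure means, here is a minimal sketch in Python of the metric as just described: each frame's time compared against the running average of the 20 frames before it. The data is hypothetical, and the exact windowing in our tooling may differ.

```python
import numpy as np

def frame_variance_ms(frame_times_ms, window=20):
    """Per-frame deviation (ms) from the running average of the previous
    `window` frame times."""
    ft = np.asarray(frame_times_ms, dtype=float)
    return np.array([abs(ft[i] - ft[i - window:i].mean())
                     for i in range(window, len(ft))])

# Hypothetical ~60 FPS run with mild jitter plus occasional 8 ms spikes.
rng = np.random.default_rng(1)
run = rng.normal(16.7, 0.8, 2000)
run[::200] += 8.0                        # inject a spike every 200 frames
variance = frame_variance_ms(run)
print(np.percentile(variance, [50, 80, 95]))   # ISU-style percentile readout
```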
There are two takeaways from this first page of results. First, the AMD Radeon HD 7990 or HD 7970s in CrossFire are not going to compare well to the GTX Titan or GTX 690 in many cases because of the runt and dropped frame issues we have detailed. Second, while the GTX 690 may be "faster" than the GTX Titan in Battlefield 3, at higher resolutions and especially in multi-monitor setups the GTX Titan looks to provide the better overall experience.
Ryan,
Don't worry about the negative and biased comments.
Thank you for this great review; it has opened my eyes to the cause of these problems, and hopefully to a new way of reviewing all graphics cards in the future, instead of just looking at the highest FPS numbers.
I have always thought a smooth experience is better than fast (high FPS) but choppy visual gameplay.
Hopefully AMD and NVIDIA will address these issues in their next GPU and/or driver releases now that they have been exposed, rather than just targeting figures. That means a better gameplay experience for the consumer.
Thank you, and keep up the good work.
I think that instead of the percentile curve you could reach a more meaningful result using a derived curve (the derivative of the frame-time curve).
Let's say that the average is around 60 fps.
Now let's say that 20 percent of the frames are 25 ms (40 fps).
The difference is in how these 25 ms values are spread through the curve: whether they all sit together, or alternate with 17 ms ones, forming a saw-like shape in the curve.
You will not have the same feeling stutter-wise (and here I am not saying anything new).
What I want to say is that the percentile graph is not appropriate for the kind of analysis you are doing. You should use a derived curve, since deriving a function measures how quickly the curve grows (negatively or positively), and that is not something the percentile curve captures. After that you could measure the area of this curve (in absolute value, so rises and falls don't cancel out), and you could even arrive at a single number measuring the amount of stutter. In fact, this way you would take out of the equation the part of the frame-time curve that sits below the average but runs steadily (something you can't do with a percentile curve).
Calculating the area of the derivative of a very saw-like frame-time curve you would obtain a high number, whereas calculating the area of the derivative of a smooth (even if varying) frame-time curve you would get a very low number. This would tell you how smooth the transitions are, not whether the GPU is powerful enough to make the game playable. For that you should check the average fps.
So in the end: if you get decent fps and a very low value for the area of this function, you get a great experience;
if you get decent fps but a high derived-function area value, you get a stuttery experience.
If you get low fps and a low area value, you have an underpowered GPU but good smoothness.
EDITED: I made some corrections to the post I previously wrote, since it is not possible to edit it.
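To illustrate the proposed metric with hypothetical numbers: two runs can share the same average FPS and the same percentile curve, yet the area of the derived curve (the summed absolute frame-to-frame change) separates them immediately. A quick Python sketch:

```python
import numpy as np

# Two hypothetical 100-frame runs with identical frame-time distributions
# (80 frames at 17 ms, 20 frames at 25 ms): same average, same percentiles.
clustered   = np.array([17.0] * 80 + [25.0] * 20)    # slow frames bunched together
alternating = np.array(([17.0] * 4 + [25.0]) * 20)   # slow frames spread out, saw-like

for name, ft in (("clustered", clustered), ("alternating", alternating)):
    area = np.abs(np.diff(ft)).sum()   # area of the derived curve (total variation)
    print(f"{name:11s} avg fps = {1000 / ft.mean():.1f}, derived-curve area = {area:.0f} ms")
# clustered:   avg fps = 53.8, area =   8 ms (a single transition)
# alternating: avg fps = 53.8, area = 312 ms (constant saw-tooth)
```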
Do a quick Google search for “geforce frame metering” and you will find out why the NVIDIA cards rarely show runt frames. In fact, NVIDIA cards DO have them; they just delay those frames a bit to match the pacing of the other, good frames, so the frame time chart miraculously looks good.
That's NVIDIA: it's meant to SELL, at crazy price tags of course.
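For the curious, here is a toy Python sketch of the delay-to-smooth idea the comment describes. All numbers are hypothetical, and this is only a guess at the general technique; NVIDIA has not published its actual frame metering algorithm.

```python
from itertools import accumulate, cycle, islice

def meter(completions_ms, target_ms):
    """Toy frame metering: hold back any frame that finishes too soon after
    the previous present, so on-screen pacing approaches target_ms. A real
    implementation would estimate the target interval on the fly."""
    presents = [completions_ms[0]]
    for t in completions_ms[1:]:
        presents.append(max(t, presents[-1] + target_ms))
    return presents

# AFR-style micro-stutter: frames finish in quick pairs, 5 ms then 28 ms apart.
gaps = list(islice(cycle([5.0, 28.0]), 20))
completions = list(accumulate([0.0] + gaps))
metered = meter(completions, target_ms=16.5)
# Raw present-to-present gaps alternate 5/28 ms; metered gaps settle at
# ~16.5 ms -- smoother on a frame time chart, at the cost of added latency.
```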