DiRT 3 (DirectX 11)
A continuation of the Colin McRae series, though without his name attached, DiRT 3 is one of the top racing games in the world and offers stunning imagery along with support for DirectX 11 features.
Our settings for DiRT 3
DiRT 3 was a game in our first article that actually showed CrossFire performing closer to traditional expectations, and the same is true here. The FRAPS-based frame rate averages and our observed frame rates are essentially identical at 1920×1080, and all three cards run this game exceedingly quickly.
The plots for DiRT 3 at 1920×1080 all show solid results: frame times vary only between 4 ms and 10 ms (with the occasional spike), resulting in high frame rates and solid responsiveness. The GTX 690 results (blue) largely hide the HD 7970s in CrossFire results (orange), meaning the frame times are in the same ballpark. The GTX Titan shows a tighter overall frame time variance, as expected from a single GPU.
Both the GTX Titan and the HD 7970s in CrossFire (emulating the HD 7990) average about 140 FPS, while the GTX 690 sits a little higher at 155 FPS or so.
The frame time variance graph, which would reveal potential stutter, might seem to paint a bad picture for the GTX 690, but keep in mind that the y-axis peaks at just 2.5 ms; none of these cards is proving to be a problem.
At 2560×1440 the HD 7970s have a single span of poor performance where the runt frame issue crops up, and in that stretch the observed frame rate is much lower than the FRAPS-reported speed. Otherwise, the HD 7990 would likely outperform both the GTX 690 and the GTX Titan, with the Titan taking third place.
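To make the FRAPS-versus-observed distinction concrete, here is a minimal sketch in Python of how a runt filter changes the reported average. This is not our actual capture tooling, and the 21-scanline cutoff is an assumption for illustration; the idea is simply that a runt contributes display time but not a countable frame.

```python
# Minimal sketch, assuming a hypothetical 21-scanline runt cutoff.
RUNT_THRESHOLD = 21  # scanlines; any captured frame shorter than this is a "runt"

def fraps_fps(frame_times_ms):
    """FRAPS-style average: every presented frame counts as a frame."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def observed_fps(frame_times_ms, frame_heights):
    """Observed average: runts are not counted as frames, but the time
    they occupied on screen is folded into the preceding full frame."""
    merged = []
    for dt, height in zip(frame_times_ms, frame_heights):
        if height < RUNT_THRESHOLD and merged:
            merged[-1] += dt   # runt: extend the previous frame's time
        else:
            merged.append(dt)  # full frame: counts on its own
    return 1000.0 * len(merged) / sum(merged)

# Every other frame a runt: FRAPS reports 125 FPS, observed is 62.5 FPS.
times = [8.0] * 10
heights = [1080, 10] * 5
print(fraps_fps(times), observed_fps(times, heights))
```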
Frame times actually look pretty good for all three cards again, with the obvious exception of that blotch of orange in the middle. However, the rest of the HD 7990 times, and even the GTX 690 results, show a more consistent frame rate than we saw at 1920×1080, which tells you the CPU is becoming less of a bottleneck.
The single hiccup of runts drags the minimum FPS percentile data down toward the end of the graph for the HD 7970s, but otherwise the AMD cards look pretty solid again. The GTX 690 is running away from the Titan's performance here.
Frame time variance is still pretty good for the two NVIDIA options here, though you can see where the differences lie between the GTX 690 and the GTX Titan.
Without proper data from the ailing AMD Eyefinity setup, we are only comparing GTX 690 and GTX Titan results, both of which show identical FRAPS and observed frame rates.
The plot of frame times offers a very compelling reason why single GPU cards can still hold advantages over dual GPU cards that many might consider "slower". Notice how much more consistent the frame times are for the GTX Titan, and how the render time spikes seen on the GTX 690 are absent from its results. This is also helped by the 6GB of frame buffer on Titan (compared to 2GB per GPU on the GTX 690).
The minimum FPS graph shows this another way: the straighter and flatter the line here, the more consistent the experience. The GTX 690 does trail off a bit at the end, but the results aren't overly dramatic.
Finally, our frame variance chart shows it again: the GTX Titan never sees more than a 1 ms difference between the current frame time and the running average of the previous 20 frame times, while the GTX 690 has spikes nearing 5 ms.
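Since that running-average comparison is how we have been describing frame variance, a short sketch may help make the metric concrete. This is our reading of the metric as stated above; the exact formula behind the published charts may differ, and the function name is ours.

```python
from collections import deque

def frame_time_variance(frame_times_ms, window=20):
    """For each frame, the absolute difference between its frame time and
    the running average of the previous `window` frame times. This mirrors
    the description above; the published charts may compute it differently."""
    history = deque(maxlen=window)
    variance = []
    for dt in frame_times_ms:
        avg = sum(history) / len(history) if history else dt
        variance.append(abs(dt - avg))
        history.append(dt)
    return variance
```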
Ryan,
Don’t worry about the negative and biased comments.
Thank you for this great review; it has opened my eyes to the cause of these problems, and hopefully to a new way of reviewing all graphics cards in the future, instead of just looking at the highest FPS numbers.
I have always thought a smooth experience is better than fast (high FPS) but choppy gameplay.
Hopefully AMD and NVIDIA will consider these issues in their next GPU and/or driver releases now that the problem has been exposed, rather than just targeting figures. That would mean a better gameplay experience for the consumer.
Thank you, and keep up the good work.
I think that instead of the percentile curve you could reach a more meaningful result using a derived curve (the derivative of the frame time curve).
Let’s say that the average is 60 FPS.
Now let’s say that 20 percent of the frames are 25 ms (40 FPS).
The difference is in how these 25 ms values are spread across the curve: whether they are all grouped together, or alternated with 17 ms ones, forming a saw-tooth shape in the curve.
You will not have the same feeling stutter-wise (and here I am not saying anything new).
What I want to say is that the percentile graph is not appropriate for the kind of analysis you are doing. You should use a derived curve, since the derivative of a function measures how quickly the curve grows (negatively or positively), and this is not captured by the percentile curve. You could then measure the area under this derived curve and arrive at a single number quantifying the amount of stutter. In fact, this would take out of the equation the part of the frame time curve that is below the average but runs steadily (something you cannot do with a percentile curve).
Calculating the area of the derivative of a very saw-tooth-like frame time curve would give a high number, whereas calculating the area of the derivative of a smooth (even if varying) frame time curve would give a very low number. This would tell you how smooth the transitions are, not whether the GPU is powerful enough to make the game playable; for that you should check the average FPS.
So in the end: if you get decent FPS and a very low value for the area of this function, you have a great experience;
if you get decent FPS but a high derived-function area value, you have a stuttery experience;
and if you get low FPS but a low area value, you have an underpowered GPU but good smoothness.
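A minimal Python sketch of the metric this comment proposes, under two assumptions of ours: that the "derived curve" means the frame-to-frame deltas of the frame time series, and that the "area" means the sum of their absolute values.

```python
def stutter_area(frame_times_ms):
    """Single-number stutter metric sketched from the comment above:
    'differentiate' the frame time curve by taking frame-to-frame deltas,
    then 'integrate' by summing their absolute values. A steady curve,
    even a slow one, scores near zero; a saw-tooth curve scores high."""
    return sum(abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:]))

# The comment's scenario: 20 percent of 100 frames take 25 ms, the rest 17 ms.
clustered  = [17.0] * 80 + [25.0] * 20         # slow frames grouped together
alternated = [17.0, 25.0] * 20 + [17.0] * 60   # slow frames interleaved

print(stutter_area(clustered))   # 8.0  : one smooth step, little stutter
print(stutter_area(alternated))  # 320.0: saw-tooth, heavy stutter
```

Both runs contain exactly the same frame times, so their average FPS and percentile curves are identical, yet the derived-curve areas differ by a factor of forty, which is precisely the distinction the comment is after.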
Quick Google “geforce frame metering” and you will find out why NVIDIA cards rarely show runt frames. In fact, NVIDIA cards DO have them; the driver just delays those frames a bit to match the pacing of the good frames, so the frame time chart looks good miraculously.
That’s NVIDIA: it’s meant to SELL, at crazy price tags of course.
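For anyone curious what that delaying behavior might look like, here is a toy model of the frame-metering idea described above. NVIDIA's actual algorithm is unpublished; the smoothing factor, the pacing rule, and the function name below are all assumptions for illustration only.

```python
def meter_frames(render_times_ms, smoothing=0.9):
    """Toy model of the frame-metering idea: frames that finish too soon
    are held back so the display cadence follows a smoothed estimate of
    the frame rate. The smoothing constant is an arbitrary assumption,
    not NVIDIA's (unpublished) algorithm."""
    pace = render_times_ms[0]   # running estimate of the frame cadence
    ready = 0.0                 # timestamp when each frame finishes rendering
    shown = 0.0                 # timestamp when the previous frame was displayed
    displayed = []
    for dt in render_times_ms:
        ready += dt
        pace = smoothing * pace + (1.0 - smoothing) * dt
        shown = max(ready, shown + pace)  # delay short frames; never show early
        displayed.append(shown)
    # The metered frame times are the intervals between displayed frames.
    return [round(b - a, 1) for a, b in zip(displayed, displayed[1:])]

# AFR-style micro-stutter: a long frame followed by a near-runt, repeated.
raw = [20.0, 2.0] * 8
print(meter_frames(raw))  # intervals vary gently instead of swinging 20/2
```

Note the trade-off this toy makes visible: held-back frames are displayed later than they finished rendering, so the smoother chart comes at the cost of some added display latency.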