Crysis 3 (DirectX 11)
Master the power of the Nanosuit. Armor, Speed, Strength and the ability to cloak are the ingredients of the most effective tactical combat armor ever created. Suit up! It's all yours in Crysis 3.
In Crysis 3 at 1920×1080, we found the Radeon HD 7970s in CrossFire fell between the GTX 690 and the GTX Titan.
Observed frame rates are much lower once again, with the emulated HD 7990 dropping well below the frame rates of both NVIDIA options.
The frame time graph tells the story of what is going on: a high frequency of runt frames is being detected on the 7970 cards. Both the GTX 690 and the GTX Titan have very tight bands of frame times, indicating solid and consistent frame delivery.
Our minimum frame rate graph gives a real-world average over the course of testing: around 23 FPS for the HD 7970s, 41 FPS on the Titan, and 53 FPS with the GTX 690.
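The gap between the FRAPS numbers and these observed averages comes down to runt filtering. A minimal sketch of the idea, assuming a hypothetical 21-scanline cutoff and invented frame heights (the real analysis reads scanline counts out of captured video):

```python
RUNT_SCANLINES = 21  # assumed cutoff: frames shorter than this count as runts

def observed_fps(frame_heights, total_seconds):
    """FPS counting only frames tall enough to contribute to the image."""
    visible = [h for h in frame_heights if h >= RUNT_SCANLINES]
    return len(visible) / total_seconds

# 120 frames delivered in one second, but every other frame is a
# few-scanline runt: FRAPS counts 120 FPS, the viewer sees 60.
heights = [540, 5] * 60
print(observed_fps(heights, 1.0))  # 60.0
```

Filtering like this is why the CrossFire results fall so far between the FRAPS-based and observed charts.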
Frame variance data shows that the HD 7970s are basically never running in line with each other and are in a constant state of flux. Both the GTX 690 and the GTX Titan show much less frame time deviation, though you can see just a bit of a tail on the GTX 690, indicating that frame times are closer together on the Titan.
Jacking up the resolution to 2560×1440 actually moves the Radeon HD 7970s in CrossFire ahead of both NVIDIA cards in the FRAPS-based data, but once again that changes below…
Observed frame rates are lower with some occasional spikes up.
Frame times are very erratic again, oscillating between low and expected values. There are more instances of convergence here, spans where frame times ARE consistent, but they are definitely in the minority. You might also notice that the GTX 690's frame time band is noticeably wider and more varied than the GTX Titan's now that we have moved to 2560×1440.
With the disparity between high and low frame times, as well as the converged areas of the frame time plot above, the average frame rates of the 7990 and Titan are the same at the 50th percentile. However, as we slide up the scale they diverge rather quickly, and NVIDIA's Titan maintains higher frame rates the rest of the time. The GTX 690 is definitely the better performer here, with only a modest slope down (lower frame rates) towards the upper-90s percentiles.
Finally, our frame time variance paints the story of the HD 7970s representing the HD 7990 – clearly there is an issue from the outset once again. Towards the end of the graph we see frame time gaps of as much as 25 ms, while the GTX Titan never gets above 5 ms.
As we move up to the triple monitor resolution we once again see problems with the AMD Radeon HD 7970s: a lot of dropped frames and HUGE artificial spikes in frame rates.
Observed frame rates paint a totally different story, with the HD 7990 trailing the pack and the GTX 690 and Titan sharing much the same performance levels.
The plot of frame times tells us two things: first, Eyefinity is just borked with CrossFire. We don't like it! Second, the GTX 690 is definitely more variable than the GTX Titan, which might mean we have a different "performance leader" in this case compared to the prior resolutions.
This is pretty interesting; we expected the HD 7970s to run slower (and they do) but take a look at how the GTX 690 and GTX Titan start out and then swap places. Clearly the Titan provides the most consistent experience from beginning to end even though it might not always be the fastest.
While we expected the frame variance to be poor with the HD 7970s in CrossFire, the GTX 690 also takes a bit of a beating here compared to the Titan. As we dive into the 90th percentile and above, we see frame time variances of 20 ms and more, indicating a lot of potential stutter. The GTX Titan, with its single high-powered GPU, never breaches the 7-8 ms level.
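For readers who want to reproduce this kind of analysis from their own frame time logs, the percentile and frame-to-frame variance figures are straightforward to compute. A minimal sketch; the frame times below are invented, and nearest-rank is just one of several percentile conventions:

```python
import math

# Hypothetical frame times in milliseconds (not the review's actual capture)
frame_times = [16.7, 16.9, 16.5, 33.0, 16.8, 17.0, 16.6, 41.0, 16.7, 16.9]

def percentile(values, p):
    """Nearest-rank percentile: p% of frames complete at or under this time."""
    ordered = sorted(values)
    k = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[k]

# The 50th percentile is the "typical" frame; the tail exposes stutter.
for p in (50, 90, 95):
    print(f"{p}th percentile frame time: {percentile(frame_times, p)} ms")

# Frame variance: the gap between consecutive frame times.
swings = [abs(b - a) for a, b in zip(frame_times, frame_times[1:])]
print(f"worst frame-to-frame swing: {max(swings):.1f} ms")
```

A card can look fine at the 50th percentile and still feel rough if the 90th-percentile-and-up swings are large, which is exactly the pattern the GTX 690 shows here.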
Ryan,
Don’t worry about the negative and biased comments.
Thank you for this great review; it has opened my eyes to the cause of these problems, and hopefully to a new way of reviewing all graphics cards in future, instead of just looking at the highest FPS numbers.
I have always thought a smooth experience is better than fast (high FPS) but choppy visual gameplay.
Hopefully AMD and NVIDIA will consider these issues in their next GPU and/or driver releases now that this has been exposed, rather than just targeting figures. This means a better gameplay experience for the consumer.
Thank you, and keep up the good work.
I think that instead of the percentile curve you could reach a more meaningful result using a derived curve (the derivative of the frame time curve).
Let’s say that the average is 60 FPS, and that 20 percent of the frames are 25 ms (40 FPS).
The difference is in how these 25 ms values are spread through the curve: whether they are all together, or alternated with 17 ms ones, forming a saw-tooth shape in the curve.
You will not have the same feeling stutter-wise (and here I am not saying anything new).
What I want to say is that the percentile graph is not appropriate for the kind of analysis you are doing. You should use a derived curve, since differentiating a function measures how quickly the curve grows (negatively or positively), and this is not measured by the percentile curve. After that you could measure the area under this curve, and you could even arrive at a single number to measure the amount of stutter. In fact, in this way you would take out of the equation the part of the frame time curve that is below the average but runs steadily (something the percentile curve can’t do).
Calculating the area of the derivative of a very saw-tooth-like frame time curve, you would obtain a high number, whereas calculating the area of the derivative of a smooth (even if varying) frame time curve, you would get a very low number. This would tell you how smooth the transitions are, not whether the GPU is powerful enough to make the game playable; for that you should check the average FPS.
So in the end: if you get decent FPS and a very low value for the area of this function, you got a great experience;
if you get decent FPS but a high derived-function area value, you got a stuttery experience;
if you get low FPS and a low value, you have an under-dimensioned GPU but good smoothness.
EDITED: I made some corrections to the post I previously wrote, since it is not possible to edit it.
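For what it's worth, the metric this comment describes is easy to prototype: differentiate the frame time series and sum the absolute changes (the total variation). A sketch of the commenter's proposal, not anything the review itself uses, with invented numbers:

```python
def stutter_area(frame_times_ms):
    """Total variation of the frame time curve: the sum of absolute
    frame-to-frame changes. A steady curve scores near zero even if it
    is slow; a saw-tooth curve scores high."""
    return sum(abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:]))

steady_slow = [25.0] * 8        # 40 FPS, but perfectly even pacing
sawtooth = [17.0, 25.0] * 4     # similar average, alternating frame times

print(stutter_area(steady_slow))  # 0.0  -> smooth despite the low FPS
print(stutter_area(sawtooth))     # 56.0 -> stutter
```

As the comment argues, this single number separates smoothness from raw speed: the slow-but-steady series scores zero while the saw-tooth series does not, even though their average frame rates are close.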
Quick Google “geforce frame metering” and you will find out why the nVidia cards rarely show runt frames. In fact, nVidia cards DO have them. They just delay those frames a bit to match the pace of the other, good frames, so the frame time chart looks good miraculously.
That’s nVidia: it’s meant to SELL, at crazy price tags of course.
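Whatever you make of the motive, the mechanism the comment describes is simple to sketch: hold early frames back so presentation intervals even out. A toy illustration with invented numbers, not NVIDIA's actual algorithm:

```python
def meter_frames(arrival_ms, interval_ms):
    """Toy frame metering: never present a frame sooner than interval_ms
    after the previous one; frames that arrive early are held back."""
    presented = []
    last = None
    for t in arrival_ms:
        if last is not None and t < last + interval_ms:
            t = last + interval_ms  # delay the early frame
        presented.append(t)
        last = t
    return presented

# AFR micro-stutter: frames arrive in uneven pairs (5 ms gap, then 28 ms).
arrivals = [0, 5, 33, 38, 66, 71]
print(meter_frames(arrivals, 16.5))  # [0, 16.5, 33, 49.5, 66, 82.5]
```

The output intervals are a uniform 16.5 ms even though the input arrivals were bursty, which is exactly why a metered card can show a tight frame time band while still rendering frames unevenly.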