Sleeping Dogs – HD 7970 versus GTX 680
Our Sleeping Dogs testing shows that the game is quite a bit more GPU intensive (at least at our quality settings) than I had originally thought. The initial test results show the HD 7970 a solid margin faster than the GTX 680, and HD 7970s in CrossFire hold an even bigger advantage over GTX 680s in SLI.
But, just as we witnessed with BF3 and Crysis 3, there is definitely a problem brewing for AMD. At 1920×1080 the frame times are very consistent for the single cards and NVIDIA’s SLI solution but for the AMD cards in CrossFire, the experience is a mess, filled with runt frames.
The result is an observed frame rate average well below what would be reported by FRAPS and essentially no faster than a single HD 7970 graphics card. Interestingly, the spikes of higher observed frame rate match up perfectly with the very few “tight” areas of our frame time map.
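For anyone who wants to see exactly how a pile of runt frames drags the observed average below the FRAPS number, here is a rough Python sketch. The 20-scanline runt threshold and the simple filtering rule are assumptions for illustration only, not the exact capture pipeline we use.

def fraps_fps(frame_times_ms):
    # Average FPS the way FRAPS reports it: every frame counts, runt or not.
    total_s = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_s

def observed_fps(frame_times_ms, frame_heights, runt_threshold=20):
    # Average FPS counting only frames tall enough (in captured scanlines) to actually be seen.
    total_s = sum(frame_times_ms) / 1000.0
    visible = [h for h in frame_heights if h >= runt_threshold]
    return len(visible) / total_s

# Example: every other frame is a tiny sliver, much like what CrossFire produced here.
times = [8.3, 8.3] * 100             # FRAPS sees roughly 120 FPS
heights = [1075, 5] * 100            # half the frames are 5-scanline runts
print(round(fraps_fps(times)), round(observed_fps(times, heights)))   # prints roughly: 120 60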
The good news for AMD is that the HD 7970 is consistently faster than the GTX 680 in a single card scenario and its frame times are very even. The very bad news, of course, is that two Radeon HD 7970s in CrossFire are only as fast as a single card when looking at perceived frame rates. That gives the GTX 680s in SLI the win almost by default.
Our ISU-based stutter graph tells a very similar story, with the CrossFire frame times easily pulling away from the rest of the group by the 80th percentile.
At 2560×1440 the story starts once again with the HD 7970 outpacing the GTX 680 and CrossFire running much faster than the GTX 680s in SLI.
The plot of frame times tells the whole story though, revealing the inconsistent frame times and runts that are plaguing CrossFire.
Thus, the observed FPS we see here is much, much lower than what FRAPS reports and in fact is just barely faster than the performance of a single card!
The minimum FPS percentile graph shows another angle of the same problem for AMD – the frame rates are identical for the single and dual GPU configurations. Despite the fact that NVIDIA’s GTX 680 is slower than the HD 7970, with SLI working correctly and efficiently it is able to scale Sleeping Dogs from a 24 FPS average across the entire run to about 46 FPS.
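To make the percentile idea concrete, here is a small Python sketch of how a minimum-FPS-by-percentile curve can be built from raw frame times. The nearest-rank percentile math and the sample numbers are illustrative assumptions, not our actual plotting code.

def min_fps_percentiles(frame_times_ms, percentiles=(50, 90, 95, 99)):
    # For each percentile X, report the frame rate the card sustains for X% of the run.
    ordered = sorted(frame_times_ms)             # fastest (shortest) frames first
    n = len(ordered)
    curve = {}
    for p in percentiles:
        idx = min(n - 1, int(n * p / 100))       # nearest-rank index
        curve[p] = 1000.0 / ordered[idx]         # slowest frame inside the Xth percentile
    return curve

# Example: mostly ~33 ms frames (about 30 FPS) with a few slower hitches at the end.
sample = [33.3] * 95 + [37.0] * 4 + [55.0]
print({p: round(fps, 1) for p, fps in min_fps_percentiles(sample).items()})
# -> {50: 30.0, 90: 30.0, 95: 27.0, 99: 18.2}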
Our ISU graph shows that once we take out the runts, the frame time variance from the CrossFire cards is actually fairly minimal up until the 90th percentile, after which it skyrockets.
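Since the ISU number can feel a little abstract, here is a hedged Python sketch of one way a frame time variance curve like this could be built. The deviation-from-a-running-average approach below is a stand-in for illustration, not the exact ISU formula.

def stutter_percentiles(frame_times_ms, window=20, percentiles=(50, 75, 90, 95, 99)):
    # Measure how far each frame strays from the average of the frames just before it,
    # then look at the percentiles of those deviations.
    deviations = []
    for i, t in enumerate(frame_times_ms):
        recent = frame_times_ms[max(0, i - window):i] or [t]
        local_avg = sum(recent) / len(recent)
        deviations.append(abs(t - local_avg))
    deviations.sort()
    n = len(deviations)
    return {p: round(deviations[min(n - 1, int(n * p / 100))], 1) for p in percentiles}

# A smooth 60 FPS run versus one with a ~70 ms hitch every 25 frames.
smooth = [16.7] * 200
hitchy = [70.0 if i % 25 == 24 else 16.7 for i in range(200)]
print(stutter_percentiles(smooth))    # flat at 0.0 across every percentile
print(stutter_percentiles(hitchy))    # stays small through the 95th percentile, then jumps at the 99th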
We couldn’t get usable results from the 5760×1080 testing on Sleeping Dogs with the HD 7970 CrossFire setup, so instead we are looking at another three-card set of graphs. At this setting we turned the AA down from Extreme to High, which is what allows these GPUs to run at this resolution at all. The GTX 680 is once again slower than the HD 7970, though the 680s scale correctly and quite well, taking performance from 30 FPS on average to 51 FPS or so.
There is a bit more frame time variance than I would like to see with the GTX 680s in SLI, with a few dozen hitches clearly visible in the image above. Single card results continue to be level and reliable for both AMD and NVIDIA.
The observed frame rate matches what the FRAPS metrics report.
The Min FPS percentile graph shows us the value of consistent frame times on the GTX 680 – it is nearly a straight line across the screen, starting at 30 FPS at the 50% mark and only dropping to 27 FPS at the 99% level.
The frame time variance graph (our ISU) shows a very flat pattern with the GTX 680 and HD 7970 cards all the way through the test runs though the SLI configuration does see more potential for stutter with a rising line as we hit the 99th percentile.
What would a game look like if it got a smooth 120+ FPS, was on a 60Hz display, and in addition to the regular ‘vsync’ spot, it would also ‘vsync’ at the ‘1/2’ way spot? In other words, update the display at the top and the middle, updating at those same two spots every time, and only at those two spots.
Would the middle ‘vsync’ spot be annoying? helpful? noticed? informative? etc…? (This sounds like a good way to see how important fps is)
What’s a good name for this?
1/2 x vsync, 2 x vsync, vsync 1/2 x, vsync 2 x, or something else?
What’s the logic behind your pick(s)?
(Note up front: my English isn’t great.)
1) Is there a difference in observed FPS between cards with more RAM? i.e. an SLI of 2x GTX 770 2GB vs. an SLI of 2x GTX 770 4GB?
2) Can you publish a min/max/variance of partial frames per frame? Instead of just runts, I want to know how many different “colors” (frame slices) appear in each frame, and whether they are evenly spread.
Nice review. I’m interested in how this tech is evolving.
But now I’m curious. I’ve read some of your test methods, but I may have missed something. I’ve seen mostly games that are single player / first person. Is that part of your methodology? I’m thinking of titles with more intensive object rendering, like Total War: Rome II, which has to render myriads of objects and stresses memory more. Have you considered something like that?