What to Look For, Test Setup
We take another trip down the road of Frame Rating with the GeForce GTX 660 Ti and the Radeon HD 7950.
Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons. Here is the schedule:
- 3/27: Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing
- 3/27: Radeon HD 7970 GHz Edition vs GeForce GTX 680 (Single and Dual GPU)
- 3/30: AMD Radeon HD 7990 vs GeForce GTX 690 vs GeForce GTX Titan
- 4/2: Radeon HD 7950 vs GeForce GTX 660 Ti (Single and Dual GPU)
- 4/5: Radeon HD 7870 GHz Edition vs GeForce GTX 660 (Single and Dual GPU)
We are back again with another edition of our continued reveal of data from the capture-based Frame Rating GPU performance methods. In this third segment we are moving on down the product stack to the NVIDIA GeForce GTX 660 Ti and the AMD Radeon HD 7950 – both cards that fall into a similar price range.
I have gotten many questions about why we chose the cards in each comparison, and the answer is pretty straightforward: pricing. In our first article we looked at the Radeon HD 7970 GHz Edition and the GeForce GTX 680, while in the second we compared the Radeon HD 7990 (HD 7970s in CrossFire), the GeForce GTX 690 and the GeForce GTX Titan. This time around we have the GeForce GTX 660 Ti ($289 on Newegg.com) and the Radeon HD 7950 ($299 on Newegg.com), but we did not include the GeForce GTX 670 because it sits much higher at $359 or so. I know some of you are going to be disappointed that it isn't in here, but I promise we'll see it again in a future piece!
If you are just joining this article series today, you have missed a lot! If nothing else, you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with. In short, we are moving away from using FRAPS for average frame rates or even frame times. Instead, we use a secondary hardware capture system to record all the frames of our gameplay as they would be displayed to the gamer, then perform post-process analysis on that recorded file to measure real-world performance.
Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display. Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the test system. By applying a unique overlay color to each frame the game produces, we can gather a new kind of information that tells a very different story.
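For those curious how the overlay analysis works mechanically, here is a minimal sketch in Python. This is not PC Perspective's actual extraction tool: it assumes the per-frame overlay colors occupy the leftmost pixel column of each captured 1080-line frame and uses a fixed 21-scan-line runt cutoff, both simplifications for illustration.

```python
import numpy as np

def band_heights(column: np.ndarray) -> list[tuple[tuple[int, int, int], int]]:
    """Group consecutive scan lines by overlay color; each (color, height)
    pair is one rendered frame's slice of the captured frame."""
    bands: list[tuple[tuple[int, int, int], int]] = []
    for row in column:
        color = tuple(int(c) for c in row)
        if bands and bands[-1][0] == color:
            bands[-1] = (color, bands[-1][1] + 1)
        else:
            bands.append((color, 1))
    return bands

# Synthetic 1080-line capture: frame A fills 500 scan lines, frame B is
# an 8-line runt, frame C fills the remaining 572 lines.
column = np.zeros((1080, 3), dtype=np.uint8)
column[:500] = (255, 0, 0)
column[500:508] = (0, 255, 0)
column[508:] = (0, 0, 255)

for color, height in band_heights(column):
    print(color, height, "RUNT" if height < 21 else "ok")  # 21 is a placeholder cutoff
```

Running over every captured frame in the recorded AVI and timing the color transitions is what turns the raw video into per-frame display times.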
The capture card that makes all of this work possible.
I don't want to spend too much time on this part of the story here, as I already wrote a solid 16,000 words on the topic in our first article, and I think you'll really find the results fascinating. So, please check out that first article if you have any questions before diving into these results today!
| Test System Setup | |
| --- | --- |
| CPU | Intel Core i7-3960X Sandy Bridge-E |
| Motherboard | ASUS P9X79 Deluxe |
| Memory | Corsair Dominator DDR3-1600 16GB |
| Hard Drive | OCZ Agility 4 256GB SSD |
| Sound Card | On-board |
| Graphics Card | NVIDIA GeForce GTX 660 Ti 2GB / AMD Radeon HD 7950 3GB |
| Graphics Drivers | AMD: 13.2 beta 7 / NVIDIA: 314.07 beta |
| Power Supply | Corsair AX1200i |
| Operating System | Windows 8 Pro x64 |
On to the results!
Another article to confirm that with vsync off, CFX is nerfed at the moment…
Which doesn't affect me in the slightest, as I don't play with vsync off. I won't play with vsync off, full stop.
Twin Lightning 7970s @ 1400 MHz: 60 FPS solid in Crysis 3 with RadeonPro's 60 FPS DFC and vsync on, and it's butter smooth. Slightly different to the results I see in this article, I might add.
My advice to any CFX user is simply to run with vsync on and RadeonPro DFC (Dynamic Frame Control) at 60 FPS, and you won't get 90% of the issues in these articles.
It is a pity that pcper won't simply do a review with vsync on running RadeonPro… I guess they wouldn't have much to write about then.
I don't think dropped frames are as significant to the viewing experience as the runt frames are, which also cause small tears. Also, keeping the displayed image the most current representation of the game state would seem to support dropping a frame if it gets in the way of a newer frame.
That said, I really want to see some rules of thumb applied to the graphs as to what constitutes perceptible gameplay degradation. For example, in the frame variance chart, I would think there is a slope-based assessment as to the type of problem a person would perceive.
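As a rough illustration of that slope-based idea, here is a minimal sketch. The window size and the 2x spike ratio are placeholder values for illustration, not validated perceptual thresholds.

```python
def flag_spikes(frame_times_ms: list[float], window: int = 20,
                ratio: float = 2.0) -> list[int]:
    """Return indices where a frame took `ratio` times longer than the
    running average of the previous `window` frames."""
    spikes = []
    for i in range(window, len(frame_times_ms)):
        baseline = sum(frame_times_ms[i - window:i]) / window
        if frame_times_ms[i] > ratio * baseline:
            spikes.append(i)
    return spikes

# Steady 60 FPS pacing with one 50 ms hitch at index 30.
times = [16.7] * 60
times[30] = 50.0
print(flag_spikes(times))  # -> [30]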
The idea of "rules of thumb applied to the graphs as to what constitutes perceptible gameplay degradation" is a good one, but it is much harder to actually do. That is the eventual goal, but in all honesty it will likely be an issue that changes with each gaming title.
I was thinking about how your FPS compared to your refresh rate affects what the average frame size would be. This might mean that what is considered a runt frame should scale based on FPS. This may also help narrow down when something is not acceptable.
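A sketch of what such an FPS-scaled runt cutoff could look like, assuming a fixed refresh rate and 1080 visible scan lines; the 25% fraction is a made-up placeholder, not a validated figure.

```python
def runt_threshold(fps: float, refresh_hz: float = 60.0,
                   lines: int = 1080, fraction: float = 0.25) -> float:
    """With vsync off, the average band height in scan lines is
    (lines per refresh * refresh rate) / FPS; call anything under
    `fraction` of that a runt."""
    avg_band = lines * refresh_hz / fps
    return fraction * avg_band

for fps in (60, 120, 240):
    print(fps, round(runt_threshold(fps), 1))
# 60 FPS -> 270.0 lines, 120 FPS -> 135.0, 240 FPS -> 67.5
```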
Yeah, the hidden-agenda accusations are rather bizarre considering the AMD ads, but any tech site is going to get its share of unstable posters. Don't let it get to ya.
Anyway, AMD's single GPUs came out looking nicer than I would have figured. My experience with previous gens had me expecting to see a disparity in observed frames for the single cards as well. I'd also like to see tests on the last gen from both manufacturers, but I know that may not be a practical use of copy.
Windows causes more of those issues than everything else combined. That is what I meant in the long post Ryan didn't understand.
You have to use RadeonPro and set up a profile for each game being used on CrossFire setups. It's the only way to get them to run smoothly. It's an extra program and a bit of work, but the results are promising regarding frame times. I hope future AMD driver versions have this feature built in.
You use nVidia Hardware to measure SOMETHING?
Evilol….
Hey Ryan, great articles.
Have you tried RadeonPro with Dynamic V-sync Control? It looks like they are using a technique similar to what Nvidia uses with Adaptive VSync.
I'm really surprised by this article. And it is an ambivalent sort of surprise! On one hand I'm pleased to see that SLI 660 Tis do so well at 2560×1440. On the other hand I'm dismayed that CrossFire is NOT competitive. So the crux of what I'm feeling is disappointment, because competition is good for me as a consumer.
See, I have a pair of Nvidia (specifically EVGA) GTX 660 Tis in my computer. And they were chosen because I wanted to run games on a 2560×1440 display at full resolution. But it troubles me to think that AMD isn't being competitive, because that points to a market where Nvidia's dominance is most likely to mean higher prices. Which contributes to the feeling that PC gaming may indeed be on its last legs.
The bottom line, economically speaking, is that while computers are a boon to our lives, our "life" itself does not depend on them. So when things get really tight we will find ways to get by without spending money on new hardware. And both AMD and Nvidia will be forced to change to maintain a revenue stream.
… so despite the fact I own Nvidia cards, I sure hope AMD can get their act together!
I enjoyed this article. However, after watching all the comparison videos I learned one thing: ATI and Nvidia do not have the same colors. So, out of the box, which card is producing the most accurate colors?
Why do the graphs for “FRAPS FPS” and “Observed FPS” match exactly except for CrossFire?!
There should be at least slight differences between these graphs.
You already said that the tests with FCAT and FRAPS weren't made at the same time.
So what would be the explanation for that?
The graphs match exactly.
If you measure the same scene twice, the results won't be exactly the same.
So you let the bench run, measure with FRAPS.
You let the bench run and measure with FCAT.
The results are EXACTLY the same except for crossfire.
There has to be something wrong.
The only thing wrong is the borked Crossfire. The single GPUs and SLI are doing what they should.
Read. -> Think. -> Answer.
The results can’t be exactly the same.
2 Benches, 2 different tools that give different data,
but 1 result? I don’t think so.
Maybe you should re-read what has been written, or at least look at the definition of observed frame rate.
The observed frame rate must indeed be identical to the FRAPS frame rate if all frames are counted as useful. That means there were no dropped or runt frames in the benchmark.
The tools use the same data. Only discarding useless frames would create a variance, and if there are no useless frames then there should not be a variance.
So if you bench the same scene two times, the results will be the same? Interesting.
Not at all; this is called "blinding and randomization" in journals: health journals, social science journals, and others.
We know the same scene never gives the same results across two test runs, on any graphics card.
But that applies to every graphics card being tested: bad results, good results, every card has the same chance of getting them.
In the end, we call that a fair comparison.
And there is your problem. The benchmark is only run once.
In the test run, FRAPS reports all frames rendered regardless of size. The video output is then captured via the capture card and the raw AVI file is analyzed. If there are no runts or drops (as is the case with the single GPUs tested), then the observed frame rate must be identical to the FRAPS result.
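To illustrate why the two curves coincide, here is a minimal sketch assuming band heights in scan lines and a hypothetical 21-line runt cutoff; the frame counts are invented for the example.

```python
def observed_fps(band_heights: list[int], seconds: float,
                 runt_cutoff: int = 21) -> float:
    """Frames per second after discarding runts; a FRAPS-style FPS
    would simply be len(band_heights) / seconds."""
    useful = [h for h in band_heights if h >= runt_cutoff]
    return len(useful) / seconds

single_gpu = [540, 530, 550, 545]     # no runts: observed == FRAPS-style
crossfire  = [1000, 8, 990, 12]       # alternating runts
print(observed_fps(single_gpu, 1.0))  # 4.0, identical to FRAPS-style count
print(observed_fps(crossfire, 1.0))   # 2.0, half of what FRAPS would report
```

Since both numbers come from the same single capture, the only way they can diverge is for some frames to be discarded.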
Wrong. I already asked. Ryan's answer:
I am not trolling; I know that AMD has bigger issues with multi-GPU than Nvidia.
The only thing I am saying:
Something has to be wrong with the graphs, as there will always be slight differences between two bench runs.
If that is the case then there would normally be a slight variation.
I don’t know why it would need to be run separately, though I think the podcast mentioned issues with FRAPS and the overlay being used at the same time.
You're right, there should be at least slight differences between the two runs. However, that doesn't mean those differences will be noticeable when displayed on a graph made for human consumption. For the graphs to be completely accurate, the 90 seconds or so of benchmark would have to be sampled at every frame. In at least one of the cases that's about 150 FPS, which means the graph would have to be about 13,500 pixels wide. You may have noticed that the graphs on pcper are a bit smaller than that.
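A sketch of that decimation effect, assuming the charts bucket-average per-frame samples down to the plot width (an assumption about how the published graphs are drawn, for illustration only):

```python
def decimate(samples: list[float], columns: int = 900) -> list[float]:
    """Average consecutive samples into roughly `columns` buckets,
    one per plot pixel column."""
    per_bucket = max(1, len(samples) // columns)
    return [sum(samples[i:i + per_bucket]) / len(samples[i:i + per_bucket])
            for i in range(0, len(samples), per_bucket)]

# Two 90 s runs at ~60 FPS pacing: run_b jitters +/-0.1 ms every frame.
run_a = [16.7] * 13_500
run_b = [16.7 + (0.1 if i % 2 else -0.1) for i in range(13_500)]
print(decimate(run_a)[:3])  # flat ~16.7 per column
print(decimate(run_b)[:3])  # jitter averages away; also ~16.7 per column
```

Frame-by-frame the runs differ everywhere, yet the plotted lines can be visually identical.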
Thanks for the objective and numerical article, despite the many critics and the subjectivity of some readers.
Well done, Ryan.
Thumbs up, pcper.com.
I would very much like to read a quick analysis of a previous generation's CrossFire setup (i.e. 2x 5850/5870), to try to understand if these issues are only related to the drivers / GCN architecture or to the CrossFire technology itself.
And of course, AMD should have the ability to discuss these results with pcper.com and eventually revisit these tests with the *future* drivers that will address these issues.
Regardless of the impact this kind of article has, it's known today that there are issues with AMD drivers. Even if there are some errors in some tests, if it were not for them we wouldn't even be looking at this. So… well done pcper.com, but don't leave it at this bombshell 😉
Cheers.
Thank you for taking your time to do this, Ryan. I used to use CrossFire, but the feel never quite matched what was being displayed. What I mean is, when I ran 5770 CF it always seemed jerky and stuttery even though the readings I was seeing were pretty high (as they should have been, just ahead of a 5870).
I swore off CrossFire after that and just concluded it was down to poor drivers. I'm now using GTX 670s in SLI and I have to say it's been fantastic. No stuttering and pretty smooth. I have had the odd issue waiting for drivers (Far Cry 3) but other than that it seems to work great.
Funny really. CrossFire always felt rather crude to me. Would also love to see some 5xxx results!!
BTW, could you also test 3DMark 11, please?
Results are mostly within the margin of error. The only factor making this possible is if the operating system is causing the issue. Like I posted in my big post, the operating system is the cause, probably unknowingly.
Not sure I grasp the whole methodology. If it looks good to me, what is the problem? I have (2) 7950s in CFX mode and I do not see any 'stutter' or problem, so why should it matter if FRAPS tells me 100 FPS in Battlefield 3 and this article tells me it's really only 50? Seems to me this is extreme nitpicking of something that isn't even visible to the naked eye, and quite irresponsible to make AMD out to be so bad.
Yet someone like Kyle at HardOCP has said in the Far Cry 3 review that there is no doubt nVidia is smoother, and in the earlier SLI vs. CF piece, that EVERY SINGLE TESTER AND REVIEWER THERE NOTICES SLI IS MUCH SMOOTHER THAN CROSSFIRE, and has for the past number of years.
He also says it's true some people (slow eyes, slow mind, poor vision, crap monitor, low resolution) "can't" see the differences.
Oh well, sorry you drew a lousy deck. Some cards drawn are not as good as others.
On the other hand, with them side by side, you'd probably not be a blind, doubting AMD fanboy anymore, even if you still claimed you were verbally.
Can't quite call this "observed frame rate" if you need a capture card and slow-motion video to "observe" it??
Yes you can; thousands of people notice it and report on it daily.
Others, who sold their souls to the AMD red devil, can't see it.
<- Me this round for my rig: I generally have good luck with Nvidia drivers, okay, two mid-range cards, let's see… a 192-bit memory bus on your mid-range card?! Argh, no thanks, I like anti-aliasing. OpenCL results are… wow, okay. Let's check out ATI… OpenCL is good, I have a dedicated PhysX card, heat and efficiency are… aw nuts. Well okay, I can undervolt a bit, fine. Let's get some CF 7950s… one week later and a good deal more broke, sees this article. Yargh… Oh hey, look, an Nvidia workstation GPU for gaming (Titan, right?), maybe compute tasks will… oh never mind, not so good and $1k, ouch. No Bitcoin mining for me. Maybe I should just get a single GPU? Wait… I just bought three monitors for surround gaming. I think I'll go downstairs and play a console game to relax.
Ryan,
I would like to know if the 'runt' frames are indeed 100% completely useless for the gaming experience. So let's say you take a test person, put him behind a game running 55 FPS with runts (read: observed FPS is lower), and then put him behind the same game but with the FPS capped at the observed FPS value. I'd like to know what the test person experiences under real gaming.
How is the mouse movement? Does it feel smooth? Which scenario does the test person think gives the best gaming experience, and so on…
So essentially, are the runt frames so annoying that you are better off with a much lower FPS? If so, by what margin? For example, I can imagine that 60 FPS with runts is still better than a 100% solid and consistent 30 FPS.
Since a normal screen resolution means 1080 horizontal scan lines, how do you feel about 7 or 8 scan lines being called a "valuable frame" for AMD?
Let's check it with something called logic, since the clueless are everywhere.
1080 / 7 ≈
154 runts needed to make a single full-screen picture JUST ONCE
So in that case, 154 FPS gives you an effective frame rate of 1 FPS. LOL. How valuable is 1/154th of the screen in a sliver across it? That's a runt; a drop has ZERO VISUAL VALUE, period.
How about 21 scan lines? Is that "valuable"?
1080 / 21 ≈ 51 of them to make just one full screen. LOL
So, AMD counts 51 FPS, and you get one screen of data per second. Yes, I'd say that's a very valuable way for AMD to cheat like heck on steroids.
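The same arithmetic, generalized into a small sketch. The 1080-line screen and runt heights come from the comment above; the "effective" full-screen rate is an illustrative measure, not a published metric.

```python
def effective_full_screens(reported_fps: float, runt_lines: int,
                           screen_lines: int = 1080) -> float:
    """Full screens' worth of pixels delivered per second if every
    counted frame were a runt of `runt_lines` scan lines."""
    return reported_fps * runt_lines / screen_lines

print(round(1080 / 7))                              # ~154 seven-line runts per screen
print(round(effective_full_screens(154, 7), 2))     # ~1.0 full screen per second
print(round(1080 / 21))                             # ~51 runts at 21 lines each
print(round(effective_full_screens(51, 21), 2))     # ~0.99 full screens per second
```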
As a competitive gamer, you WANT the frames to start before the previous ones have finished. Nvidia is introducing latency with their frame metering crap…
- coming from a 660 Ti owner.