Testing Setup
Testing Configuration
The specifications for our testing system haven't changed much.
| Test System Setup | |
| --- | --- |
| CPU | Intel Core i7-3960X Sandy Bridge-E |
| Motherboard | ASUS P9X79 Deluxe |
| Memory | Corsair Dominator DDR3-1600 16GB |
| Hard Drive | OCZ Agility 4 256GB SSD |
| Sound Card | On-board |
| Graphics Card | MSI GeForce GTX 970 4GB Gaming, NVIDIA GeForce GTX 970 4GB, AMD Radeon R9 290X 4GB, AMD Radeon R9 290 4GB |
| Graphics Drivers | AMD: 14.11, NVIDIA: 344.75 |
| Power Supply | Corsair AX1200i |
| Operating System | Windows 8 Pro x64 |
What you should be watching for
- MSI GTX 970 Gaming vs GTX 970 Reference – How much faster is the custom cooled MSI GTX 970 Gaming graphics card compared to the first round of GTX 970s running at reference speeds?
- MSI GTX 970 Gaming vs Radeon R9 290X and R9 290 – Okay, so the real battle lies here – how much performance difference is there between the GTX 970 Gaming and the two primary high-end GPU options from AMD?
Frame Rating: Our Testing Process
If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we are using a secondary PC to capture the output from the tested graphics card directly and then using post processing on the resulting video to determine frame rates, frame times, frame variance and much more.
This amount of data can be pretty confusing if you are attempting to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion about our Frame Rating methods before moving forward!!
While there are literally dozens of files created for each "run" of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we are generating with additional code of our own.
If you don't need the example graphs and explanations below, you can jump straight to the benchmark results now!!
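For readers who want the mechanics spelled out, here is a minimal sketch of how a capture-based analysis of this kind could turn the recorded video into per-frame display times. To be clear, this is not the actual NVIDIA/FCAT code we run; the OpenCV approach, the function names, and the 60 Hz / 1080-line capture figures are illustrative assumptions only.

```python
# Rough, illustrative take on FCAT-style capture analysis -- not the actual
# NVIDIA/PCPer scripts. Each rendered frame carries a color in a vertical
# overlay bar at the left edge of the image; counting how many scanlines each
# color occupies in the captured 60 Hz video estimates how long every frame
# was actually on screen.
import cv2  # pip install opencv-python

CAPTURE_HZ = 60.0        # assumed capture rate
ROWS_PER_CAPTURE = 1080  # assumed vertical resolution of the capture

def scanline_runs(video_path, overlay_column=0):
    """Return (color, scanline_count) runs read down the overlay bar, in order.
    Runs that span two captured frames are left split here; a fuller version
    would merge adjacent runs of the same color across capture boundaries."""
    cap = cv2.VideoCapture(video_path)
    runs = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        column = frame[:, overlay_column, :]   # BGR values down the overlay bar
        prev, count = tuple(column[0]), 0
        for pixel in column:
            color = tuple(pixel)
            if color == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = color, 1
        runs.append((prev, count))
    cap.release()
    return runs

def frame_times_ms(runs):
    """Convert scanline counts into per-frame display times in milliseconds."""
    ms_per_row = (1000.0 / CAPTURE_HZ) / ROWS_PER_CAPTURE
    return [count * ms_per_row for _, count in runs]
```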
The PCPER FRAPS File
While several of the graphs in this section are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers. The first file shows a subset of the data from the RUN file below, the average frame rate over time as defined by FRAPS, though we are combining all of the GPUs we are comparing into a single graph. This will basically emulate the data we have been showing you for the past several years.
The PCPER Observed FPS File
This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the "observed" average frame rates, shown as the blue bars in the RUN file below. This takes out the dropped and runt frames, giving you the performance metric that actually matters – how many frames are being shown to the gamer to improve the animation sequences.
As you'll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS graph indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC.
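As a rough illustration of the gap between those two numbers, the hypothetical helper below computes a per-second FRAPS-style FPS that counts every frame alongside an observed FPS that throws out runts and drops. It is a sketch under our own assumptions, not the script that generates our graphs.

```python
# Rough sketch only: given per-frame display times plus runt/drop flags produced
# elsewhere in the pipeline, compute a per-second FRAPS-style FPS (every frame
# counted) next to an "observed" FPS (runts and drops removed). Function and
# variable names are hypothetical.
def fps_over_time(frame_times_ms, is_runt, is_drop):
    fraps_fps, observed_fps = [], []
    all_frames = shown_frames = 0
    elapsed = 0.0
    for dt, runt, drop in zip(frame_times_ms, is_runt, is_drop):
        all_frames += 1
        if not (runt or drop):
            shown_frames += 1
        elapsed += dt                      # dropped frames contribute 0 ms
        if elapsed >= 1000.0:              # close out one second of display time
            fraps_fps.append(all_frames)
            observed_fps.append(shown_frames)
            all_frames = shown_frames = 0
            elapsed -= 1000.0
    return fraps_fps, observed_fps
```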
The PLOT File
The primary file that is generated from the extracted data is a plot of calculated frame times including runts. The numbers here represent the amount of time that frames appear on the screen for the user, a “thinner” line across the time span represents frame times that are consistent and thus should produce the smoothest animation to the gamer. A “wider” line or one with a lot of peaks and valleys indicates a lot more variance and is likely caused by a lot of runts being displayed.
The RUN File
While the graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card for that particular test. It is in this graph that you can see interesting data about runts, drops, average frame rate and the actual frame rate of your gaming experience.
For tests that show no runts or drops, the data is pretty clean. This is the simple frames-per-second-over-time graph that has become the standard for performance evaluation on graphics cards.
A test that does have runts and drops will look much different. The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation – as FRAPS FPS measurement does. Any area in red is a dropped frame – the wider the amount of red you see, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.
The wide yellow area is the representation of runts, the thin bands of color in our captured video, that we have determined do not add to the animation of the image on the screen. The larger the area of yellow the more often those runts are appearing.
Finally, the blue line is the measured FPS over each second after removing the runts and drops. We are going to be calling this metric the “observed frame rate” as it measures the actual speed of the animation that the gamer experiences.
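Below is a hedged sketch of how that runt and drop classification might look in code. The 21-scanline cutoff is the commonly cited FCAT default and should be read as an assumption rather than the exact threshold used for this review, and the helper names are ours.

```python
# Illustrative classification of drops and runts from the overlay data. The
# 21-scanline runt cutoff is the commonly cited FCAT default, treated here as an
# assumption. For simplicity this also assumes every rendered frame gets a
# unique overlay color; the real overlay cycles through a small repeating
# palette.
RUNT_SCANLINES = 21

def classify_frames(runs, expected_colors):
    """runs: (color, scanline_count) pairs in display order.
    expected_colors: the overlay colors the game actually rendered, in order.
    Returns one of 'dropped', 'runt' or 'shown' per rendered frame."""
    seen = {color for color, _ in runs}
    statuses = []
    for color in expected_colors:
        if color not in seen:
            statuses.append("dropped")      # never reached the display at all
            continue
        lines = sum(count for c, count in runs if c == color)
        statuses.append("runt" if lines < RUNT_SCANLINES else "shown")
    return statuses
```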
The PERcentile File
Scott introduced the idea of frame time percentiles months ago, but now that we have some different data using direct capture as opposed to FRAPS, the results might be even more telling. In this case, FCAT is showing percentiles not by frame time but instead by instantaneous FPS. This will tell you the minimum frame rate that will appear on the screen at any given percentage of time during our benchmark run. The 50th percentile should be very close to the average total frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected.
The closer this line is to being perfectly flat the better as that would mean we are running at a constant frame rate the entire time. A steep decline on the right hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
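A small sketch of that calculation, assuming the per-frame display times are already extracted from the capture (the helper name is ours, purely for illustration):

```python
# Sketch of the percentile-by-instantaneous-FPS idea, assuming per-frame display
# times in milliseconds are already available from the capture analysis.
import numpy as np

def fps_percentiles(frame_times_ms, points=(50, 90, 95, 99)):
    inst_fps = 1000.0 / np.asarray(frame_times_ms, dtype=float)
    # The value at point p is the frame rate the card stays at or above for p%
    # of the run, i.e. the (100 - p)th percentile of instantaneous FPS. A flat
    # curve across all points means a consistent frame rate end to end.
    return {p: float(np.percentile(inst_fps, 100 - p)) for p in points}
```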
The PCPER Frame Time Variance File
Of all the data we are presenting, this is probably the one that needs the most discussion. In an attempt to create a new metric for gaming and graphics performance, I wanted to try to find a way to define stutter based on the data sets we had collected. As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, t_display, or on both levels. Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?
We define a single frame variance as the difference between the current frame time and the previous frame time – how consistent the timing is between the two frames presented to the gamer. However, as I found in my testing, plotting the value of this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT. To be more specific, stutter is only perceived when there is a break from the previous animation frame rates.
Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames. Then, by sorting these frame times and plotting them in a percentile form we can get an interesting look at potential stutter. Comparing the frame times to a running average rather than just to the previous frame should prevent potential problems from legitimate performance peaks or valleys found when moving from a highly compute intensive scene to a lower one.
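To make the proposal concrete, here is a short sketch of that calculation; the helper names and the use of a simple absolute difference against the 20-frame running average are our own illustrative choices, not the exact code behind our graphs.

```python
# Short sketch of the stutter metric described above: compare each frame time to
# the running average of the previous 20 frame times, then view the variances in
# percentile form.
import numpy as np

WINDOW = 20  # previous frames in the running average, per the description above

def frame_time_variance(frame_times_ms):
    times = np.asarray(frame_times_ms, dtype=float)
    variances = []
    for i in range(WINDOW, len(times)):
        running_avg = times[i - WINDOW:i].mean()
        variances.append(abs(times[i] - running_avg))   # ms away from recent pace
    return np.asarray(variances)

def variance_percentile_curve(frame_times_ms, points=range(50, 101, 5)):
    """A curve hugging 0 ms means little stutter; a sharp rise toward the right
    edge suggests hitching or microstutter."""
    variances = frame_time_variance(frame_times_ms)
    return {p: float(np.percentile(variances, p)) for p in points}
```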
While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our game play testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions. So much so, in fact, that I am going to call this data the PCPER ISU; beer fans will appreciate the acronym, which stands for International Stutter Units.
To compare these results, you want to see a line that is as close to the 0ms mark as possible, indicating very little frame rate variance when compared to a running average of previous frames. There will be some inevitable incline as we reach the 90+ percentile, but that is expected with any game play sequence that varies from scene to scene. What we do not want to see is a sharper upward slope, which would indicate higher frame variance (ISU) and could be an indication that the game sees microstuttering and hitching problems.
So this is probably a dumb question, but why is all the testing at 1440p? At this level of GPU, do they not show a lot of difference at 1080p? If so, does this mean that if you're still gaming at 1080p, you can't go wrong with any of these cards?
My opinion – yes – a single 970 or 980 pretty much conquers everything (with a fast enough CPU) at 1080p. The only exception being the Oculus rift DK2 (1920×1080) and the 75 FPS *minimum* you want to maintain. Even there, it’ll do it for 98% of 3D games, but a few like Elite Dangerous will require low settings to perform consistently.
On the flip side, 4K gaming is a weird spot for these cards as some games are already running into VRAM issues @ 4K with 4 GB cards. Star Citizen will even use 3.5 GB at 2560×1600 (my Radeon 7970 3 GB fell flat on its face due to VRAM usage at x1600). SC is CryEngine based.
My MSI 970 doesn't conquer anything, so that is absolutely not true. Can you honestly say that no matter what game or bench you throw at a 970, with whatever settings, it's going to be a solid 60fps? Well, believe me, it's not… how about Euro Truck Sim 2 @ 1080p dropping to 40fps? Although this is actually a driver problem, don't get carried away saying it will destroy anything at 1080p, it won't.
Euro Truck Sim 2?!@? Cmon man…
Yeah, it's called a driver problem, and I have just been playing PlanetSide 2 with the exact same overclock as what Ryan achieved, and it's not on max settings because it will drop below 60fps. Don't try and pretend 1080p is dealt with.
Yeah, sorry, I should have spelled that out more. At 1080p, this card is MORE than powerful enough for just about any scenario and could even be considered overkill for many users. We tested at 2560×1440 (and sometimes at 4K) because hardware like this really needs that kind of resolution to show differences.
Even Ubisoft crap ports ?
Hey Ryan. Big fan here!
Wasn’t the whole “$400+ GPU is too much for 1080p” put to bed with the introduction of high refresh monitors?
I don't think any GPU is overkill at any resolution if your intention is to keep the GPU(s) for a while. I'd rather have 100fps than just enough.
Because 1080p is not an issue anymore… it's done, conquered for all future releases too. Midrange cards are fully capable of raping today's games at 1080p. Entry-level cards even push a solid 60 for all new games on med/high.
1080p is now, what 720p was to you before you read this.
If you think it's not conquered just because some idiot turned up 32xAA in Crysis 3, or enabled all the broken sun ray effects in STALKER: Clear Sky – then screw that guy, he clearly is an imbecile.
Rape? Seriously?
Seriously? Seriously?
Ryan, I have a request: a full review dedicated to comparing overclocked flagship cards. GTX 970 vs GTX 980 vs R9 290X vs R9 290. OC them all as high as possible and then go through all the same testing. Realistically, I'd prefer to see this all done with results for both single card and SLI/CF so we can see temps on the top card, and I'd like to see these results at both 2560×1440 and 4K. I realize that with both single and CF that would be a lot of data… but yes, a lot of data would be great to see!
I think you also used to do a bang/buck type comparison in your reviews – basically how much you get for what you paid. A review like the one I'm requesting seems to be a great place to have bang/buck comparisons.
Thanks.
The biggest problem with showing temps is that everyone's house is different, and people live in different places, with different PC setups, etc. So the only way Ryan could show off true temps of a card is if he was using a temperature-controlled room, and that costs a lot of money.
For me, I just skip all the BS about card temps for this very reason.
Well, you could get away with just a small caption that says "Using open air test bench and ambient air temp of xx degrees". That would pretty much eliminate that problem.
Are my eyes playing tricks on me? Or are you slipping with no anti-AMD drivel?
All temp readings need to have that kind of disclaimer on them.
The reason we haven't included temps (which I probably still should) is because temps are regulated by the cooler to not go over a set level. For example, even when overclocked, the GTX 970 Gaming tested here runs at 80C. To compensate, the fan just spins as fast as necessary to keep it cool.
I still think a 1080p comparison should be done anyway. Not everyone out there has a 1440p screen. I know it takes more time, but we consumers would like to know about what the majority of people are using for gaming, and that's really 1080p.
I'm confused by the sound levels of the MSI GeForce GTX 970 4GB Gaming. You say it is 30.9 dBA at idle, but it is supposed to have semi-passive cooling, where the fans don't run as long as the card is below 60C. How are you recording 30.9 dBA if the card is idling? Is your case or room especially warm?
I'd like to know because I want to buy this card for an HTPC and have it silent when not gaming and only using XBMC.
Thanks for writing up this review 🙂 I really think 1080p results should be included for us mainstream gamers/users. EVGA is kinda pricey and the cooler is just OK if you ask me. I got the MSI GTX 970 Gaming OC version and love it! I'll have to say the GTX 970 is nice now when it comes to maximum details/performance at 1080p.
Hi, I've just installed an MSI GTX 970 to replace my GTX 670 SLI setup. Here are the benchmark comparisons made in 3DMark. Interesting to see on many points, actually.
But there's a mystery that remains to be solved. If you take a look at the Cloud Gate test result for the GTX 970, you'll notice a massive drop in the physics score, which is not consistent with all the other results – it's clear there is something going on there.
http://www.3dmark.com/3dm/5572125
I’ve tried so far:
uninstalling, then cleaning with Clean Sweep, and reinstalling the graphics drivers
reinstalling 3DMark
playing with the NVIDIA 3D settings in the control panel
From the 3DMark FAQ: it seems that 'Cloud Gate' tests DirectX 10 features through DirectX 11.
On the web: I can't find anybody who has reported this so far…
Here are the two pages for comparison:
gtx970: http://www.3dmark.com/3dm/5572125
gtx 670sli: http://www.3dmark.com/3dm/2635825
I have switched the RAM for TridentX DDR3-2400 16GB and it didn't change a thing. Same results.
I am trying to find out how much power I am pulling with two GTX 970s at 5760×1080 while trying to keep a minimum of 75 fps, and whether they are able to do it.