Testing Suite and Methodology Update
If you have followed our graphics testing at PC Perspective, you'll know about the drastic shift we made in 2012 to support a technology we call Frame Rating. Frame Rating uses direct capture of the output from the system into uncompressed video files, along with FCAT-style scripts that analyze the video to produce statistics including frame rates, frame times, frame time variance, and game smoothness.
Readers and listeners might also have heard about the issues surrounding the move to DirectX 12 and UWP (Universal Windows Platform) and how they affected our testing methods. Our benchmarking process depends on a secondary application running in the background on the tested PC that draws colored overlays along the left-hand side of the screen in a repeating pattern to help us measure performance after the fact. The overlay we had been using supported DirectX 9, 10, and 11, but didn't work with DX12 or UWP games.
We’ve been working with NVIDIA to fix that, and I can report that we now have an overlay that behaves exactly as before but will now let us properly measure performance and smoothness in DX12 and UWP games! This is a big step toward maintaining the detailed analytics of game performance that enable us to push both game developers and hardware vendors to perfect their products and create the best possible gaming experiences for consumers.
As a result, our testing suite has been upgraded with a brand new collection of games and tests. Included in this review are the following:
- 3DMark Fire Strike Extreme and Ultra
- Unigine Heaven 4.0
- Dirt Rally (DX11)
- Fallout 4 (DX11)
- Gears of War Ultimate Edition (DX12/UWP)
- Grand Theft Auto V (DX11)
- Hitman (DX12)
- Rise of the Tomb Raider (DX12)
- The Witcher 3 (DX11)
We have included racing games, third-person and first-person titles, DX11, DX12, UWP, and some synthetics, going for a mix that I think encapsulates the gaming market of today and the near future as well as possible. Hopefully we can finally end the bickering in the comments about not using DX12 titles in our GPU reviews! (Ha, right.)
Our GPU testbed has remained the same since our update in late 2015, including an 8-core Haswell-E processor and plenty of memory and storage.
| PC Perspective GPU Testbed | |
| --- | --- |
| Processor | Intel Core i7-5960X Haswell-E |
| Motherboard | ASUS Rampage V Extreme X99 |
| Memory | G.Skill Ripjaws 16GB DDR4-3200 |
| Storage | OCZ Agility 4 256GB (OS), ADATA SP610 500GB (games) |
| Power Supply | Corsair AX1500i 1500 watt |
| OS | Windows 10 x64 |
| Drivers | AMD: Crimson 16.6.2; NVIDIA: 368.39 |
Current pricing for the cards in this comparison:
- Radeon RX 480 8GB – $239
- Radeon R9 390 8GB – $279
- Radeon R9 380 4GB – $179
- GeForce GTX 970 4GB – $279
- GeForce GTX 960 2GB – $179
With all of the new cards coming into and out of the market this summer, prices are going to be in flux for quite some time. The listings above were accurate as of this writing and were the basis for our benchmarking and analysis.
For those of you who have never read about our Frame Rating capture-based performance analysis system, the following section is for you. If you have, feel free to jump straight into the benchmark action!
Frame Rating: Our Testing Process
If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly and then post-process the resulting video to determine frame rates, frame times, frame variance, and much more.
This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than the other options. So please, read up on the full discussion about our Frame Rating methods before moving forward!
While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own.
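For readers who want a mental model of what those scripts actually do, here is a heavily simplified, hypothetical sketch in Python using OpenCV. This is not our production code or NVIDIA's: the overlay column position, color tolerance, and per-capture accounting are all stand-in assumptions, and a real implementation also has to stitch bands of the same color across successive captured frames.

```python
# Hypothetical sketch of FCAT-style overlay extraction (illustrative only).
# Assumes a capture where a repeating colored overlay runs down the left
# edge of every captured (displayed) frame.

import cv2          # pip install opencv-python
import numpy as np

def color_bands(frame, x=8, tol=30):
    """Segment the overlay column into runs of (color, scanline_count)."""
    column = frame[:, x, :].astype(int)      # BGR pixels, top to bottom
    bands, start = [], 0
    for y in range(1, len(column)):
        if np.abs(column[y] - column[start]).sum() > tol:
            bands.append((tuple(column[start]), y - start))
            start = y
    bands.append((tuple(column[start]), len(column) - start))
    return bands

def rough_frame_times(video_path, refresh_hz=60.0):
    """Approximate per-frame display times from overlay scanline coverage."""
    cap = cv2.VideoCapture(video_path)
    interval_ms = 1000.0 / refresh_hz        # one captured refresh interval
    times = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        bands = color_bands(frame)
        total = sum(n for _, n in bands)
        # Each band's share of the screen height approximates how long that
        # rendered frame was visible within this refresh. A real tool also
        # stitches same-color bands across successive captures.
        times.extend(interval_ms * n / total for _, n in bands)
    cap.release()
    return times
```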
The PCPER FRAPS File
Previous example data
While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers. The first file shows a subset of the data from the RUN file above: the average frame rate over time as defined by FRAPS, though we combine all of the GPUs being compared into a single graph. This basically emulates the data we have been showing you for the past several years.
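As a rough illustration of how those per-second averages fall out of the underlying frame times, here is a minimal sketch. It is a simplification for explanatory purposes, not the actual FCAT/PCPer script:

```python
def fps_over_time(frame_times_ms):
    """Roll individual frame times up into an average-FPS-per-second series."""
    fps, elapsed_ms, frames = [], 0.0, 0
    for ft in frame_times_ms:
        elapsed_ms += ft
        frames += 1
        if elapsed_ms >= 1000.0:             # close out this one-second bucket
            fps.append(frames * 1000.0 / elapsed_ms)
            elapsed_ms, frames = 0.0, 0
    return fps
```

Feeding it a steady 16.7ms stream returns a flat ~60 FPS line; uneven frame times immediately show up as dips.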
The PCPER Observed FPS File
Previous example data
This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown previously as the blue bars in the RUN file above. This takes out the dropped and runt frames, giving you the performance metric that actually matters – how many frames are actually shown to the gamer to improve the animation sequence.
As you’ll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS graph indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC.
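To make the removal of runts and drops concrete, here is a simplified illustration (again, not our production code) that operates on the color bands extracted from the capture. The 21-scanline runt cutoff is the commonly cited FCAT default and the 16-color overlay period is a stand-in value; treat both as assumptions:

```python
RUNT_SCANLINES = 21   # commonly cited FCAT runt cutoff; a stand-in value
OVERLAY_COLORS = 16   # assumed length of the repeating overlay sequence

def filter_runts_and_drops(bands):
    """bands: ordered (overlay_color_index, scanline_count) pairs for a run.

    Returns the frames that actually contributed to the on-screen animation,
    plus a count of frames that were rendered but never displayed (drops).
    """
    visible, drops = [], 0
    prev = None
    for color, scanlines in bands:
        if prev is not None:
            # A skip in the repeating color sequence means the missing
            # frames never reached the display at all: dropped frames.
            drops += (color - prev - 1) % OVERLAY_COLORS
        prev = color
        if scanlines >= RUNT_SCANLINES:   # runts add nothing visible
            visible.append((color, scanlines))
    return visible, drops
```

The observed frame rate is then just the per-second FPS computed over `visible` alone.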
The PLOT File
Previous example data
The primary file generated from the extracted data is a plot of calculated frame times, including runts. The numbers here represent the amount of time each frame appears on the screen for the user. A “thinner” line across the time span represents frame times that are consistent and thus should produce the smoothest animation for the gamer, while a “wider” line, or one with a lot of peaks and valleys, indicates much more variance and is likely caused by a lot of runts being displayed.
The RUN File
While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card for that particular test. It is in this graph that you can see interesting data about runts, drops, average frame rate, and the actual frame rate of your gaming experience.
Previous example data
For tests that show no runts or drops, the data is pretty clean. This is the typical frame-rate-over-time graph that has become the standard for performance evaluation of graphics cards.
Previous example data
A test that does have runts and drops will look much different. The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation – as the FRAPS FPS measurement does. Any area in red is a dropped frame – the wider the band of red you see, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.
The wide yellow area represents runts, the thin bands of color in our captured video that we have determined do not add to the animation of the image on the screen. The larger the area of yellow, the more often those runts are appearing.
Finally, the blue line is the measured FPS over each second after removing the runts and drops. We are going to call this metric the “observed frame rate,” as it measures the actual speed of the animation that the gamer experiences.
The PERcentile File
Previous example data
Scott introduced the idea of frame time percentiles months ago, but now that we have different data using direct capture as opposed to FRAPS, the results might be even more telling. In this case, FCAT is showing percentiles not by frame time but instead by instantaneous FPS. This tells you the minimum frame rate that will appear on the screen for any given percentage of time during our benchmark run. The 50th percentile should be very close to the average frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected.
The closer this line is to perfectly flat, the better, as that would mean we are running at a constant frame rate the entire time. A steep decline on the right-hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
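For the curious, computing such a curve from raw frame times is straightforward; a simplified illustration (not the production FCAT code):

```python
import numpy as np

def fps_percentile_curve(frame_times_ms):
    """For each percentile p, the frame rate sustained at least p% of the time."""
    inst_fps = 1000.0 / np.asarray(frame_times_ms, dtype=float)
    p = np.arange(0, 101)
    # The rate exceeded p% of the time is the (100 - p)th percentile of FPS.
    return p, np.percentile(inst_fps, 100 - p)
```

The flatter the returned curve, the steadier the frame rate, matching the reading described above.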
The PCPER Frame Time Variance File
Of all the data we are presenting, this is probably the one that needs the most discussion. In an attempt to create a new metric for gaming and graphics performance, I wanted to find a way to define stutter based on the data sets we had collected. As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, in t_display, or at both levels. Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?
We define a single frame variance as the difference between the current frame time and the previous frame time – how consistent the timing of the two frames presented to the gamer is. However, as I found in my testing, plotting the value of this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT. To be more specific, stutter is only perceived when there is a break from the previous animation frame rates.
Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames. Then, by sorting these variances and plotting them in percentile form, we get an interesting look at potential stutter. Comparing the frame times to a running average rather than just to the previous frame should prevent false positives from legitimate performance peaks or valleys found when moving from a highly compute-intensive scene to a lighter one.
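Expressed as code, that evaluation looks roughly like the sketch below. The 20-frame window comes straight from the description above; treating only slower-than-average frames as variance is a simplifying assumption of this illustration, not a confirmed detail of our scripts:

```python
import numpy as np

def variance_percentiles(frame_times_ms, window=20):
    """Frame time variance vs. a running average, sorted into percentiles."""
    ft = np.asarray(frame_times_ms, dtype=float)
    deltas = []
    for i in range(window, len(ft)):
        baseline = ft[i - window:i].mean()         # running average of prior 20
        deltas.append(max(0.0, ft[i] - baseline))  # assumption: only slowdowns count
    p = np.arange(0, 101)
    return p, np.percentile(deltas, p)
```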
Previous example data
While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our gameplay testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions. So much so, in fact, that I am going to call this data the PCPER ISU, which beer fans will appreciate as the acronym of International Stutter Units.
To compare these results, you want to see a line that is as close to the 0ms mark as possible, indicating very little frame rate variance when compared to a running average of previous frames. There will be some inevitable incline as we reach the 90th percentile and beyond, but that is expected with any gameplay sequence that varies from scene to scene. What we do not want to see is a sharper climb, which would indicate higher frame variance (ISU) and could be an indication that the game suffers from microstuttering and hitching problems.
A couple of points about the 970 OC vs 480 OC argument.
The AIB cards coming, and this is coming from Kyle Bennett at [H], will comfortably clock in the 1490-1600MHz range on the core, with 1600 being a golden sample and 1500MHz+ being very common.
That’s 970/980-level max OC. So the $$ are yet to be seen for the better coolers etc., but AIB 480s are going to be a lot better than a 970 is, AIB vs AIB even, as the 970 loses now in DX11 and badly in DX12.
Even in this review it shows the boost clock is throttled and under the max boost set by quite a margin. It’s mainly due to the cooler and power limits in the BIOS, I would think.
TDP is reduced with better cooling, so it will help a little. I don’t expect better than 970 power consumption after OC however, which is way higher than Pascal, which is a bummer. So what you are basically getting here with an AIB card is 8GB RAM vs 4GB (3.5GB), much better performance (particularly in DX12), and reportedly very good overclocking.
480 AIBs will also be out before the 1060 is.
>480 AIBs will also be out before the 1060 is.
Lmao.
There will be no stock of the ref 480 before the 1060.
For a perfect launch AMD needed 100k cards, not 10k.
AIB cards – did you mean AIB cards with custom coolers? There are only 2 teased atm, MSI/ASUS, and zero info about specs and availability.
No stock? Lol, I can buy them now even in NZ. Not any cheaper than a 970 here, mind you, but the value is still better on the 480.
You missed one; there are leaks on the Sapphire Nitro as well.
I’m 100% sure we will see AIB 480s before the 1060.. it’s a paper launch for the 1060 on the 7th for sure; wait till the end of the month at least before you see the ref 1060 at the earliest. Most reports on the AIBs say 2 weeks, so mid-July, maybe sooner. And if they have anything like the 1070/1080 shortages then yeah, it’s an utter fail for NV.
So my pick is you will have AIB 480s up against the ref 1060 on its launch day, so it could be quite a battle at that performance level.
Still good to see the RX 480s are selling in large numbers, even the ref models.
I still hear the words of AMD’s vice president and product CTO Joe Macri:
“You’ll be able to overclock this thing like no tomorrow,” “This is an overclocker’s dream.”…
Some might stay up half the night fiddling with WattMan trying to optimize power use, then have nightmares… on either company’s card. Hopefully someone will share good tips as they get better at it.
The reference boards that you can buy now are optimised for one thing: low price.
When the AIB partners start to ship their kit with better power delivery and better coolers, we will see 😀
“AMD is only able to run the Polaris GPUs at 1266 MHz while the GTX 1080 hits a 1733 MHz Boost clock, a difference of 36%.”
This “review” needs a complete rewrite…;) Comparing the $600-$700 GTX 1080 with the $239 RX 480 is idiotic, but I see that doesn’t stop you from doing it…;) AMD hasn’t released its competitor to the 1080 yet… it’s called Vega and should be released around Christmas.
But you could of course buy *two* 8GB RX 480s for CrossFire if you were of a mind to, and have comparable performance, 2x the VRAM for D3D12 games, and still pay ~$200 less than for a single 1080, etc. But if that’s the kind of power you want, you’d do better to wait for Vega, imo.
I think the comment was just comparing the clocks, not the performance. I think there is an expectation that, with the same or similar process nodes, AMD should have been able to match Nvidia on clocks, but they can’t.
This can mean either that GlobalFoundries’ 14nm sucks, that their engineers couldn’t make an efficient architecture, or that to keep costs low they are allowing low-grade chips through to get volume.
Considering clocks are the way consumers get a free boost from their cards, it’s pretty disappointing how little you can overclock these and how much heat they produce in the process.
The comment makes perfect sense. They simply questioned what the deal was with the frequency being much lower than NVidia’s. Pretty simple to me…
AMD has already explained this at PCPER. They started the GPU almost three years ago and were targeting mobile first, so the design was optimized for lower frequencies. They switched gears to get a VR READY product available, so they were forced to raise the frequencies, which is why it barely beats the GTX 970 (which is the minimum VR READY GPU).
Buy two RX 480s for CrossFire and have comparable performance? Multi-GPU is not really the way to go yet. It’s going to improve, though it will take about two years or so to get proper support into game engines.
And considering the GTX 1080 averages 85% faster than the RX 480, CrossFire makes even less sense:
https://www.techpowerup.com/reviews/AMD/RX_480/24.html
If we use $650 for the GTX 1080 and $275 for the RX 480 8GB, which I think will be realistic once prices stabilize, then there is a $100 difference in favor of a solution that may barely beat the GTX 1080 at times and at other times have the GTX 1080 up to 2X as fast in some titles.
In fact, we don’t even know if AMD has the single-pass optimization for VR that can increase FPS by 1.6X. If not, and we use 85%, then the GTX 1080 will be 3X faster in some titles (or have much better visuals, as both should run at the same 90Hz).
Hi guys, can u please explain how the RX 480 is a win for AMD?
Because the GTX 1070 consumes less wattage compared to the RX 480 (150W vs 150W) but gives significantly more performance, even though the 1070 is more expensive…
And if you look at it another way, the GTX 970 also consumes less or about the same wattage compared to the RX 480 and performs about the same…
So now my question is: did AMD not improve their architecture at all (like what NVIDIA did with Maxwell) and only take advantage of the 14nm node, OR am I missing something?
And please, this is not a fanboy or flame-war question; this is a genuine concern of mine!!!
And thanks for the reply 🙂 🙂 🙂
It’s not an architecture win, that’s for sure. Nvidia has that without question.
It is a market win, since it’s a cheap card with good performance. The question for AMD is whether these are really profitable cards.
I came to the same conclusion 🙂
They improved their own power metrics hugely vs the 28nm generation – not so much against NV. You would think, by the way some people go on here about power consumption, that it matters more than features, gaming performance, and value.
It’s of very low importance on my own scale of important metrics. Still, I guess when you can’t win perf/$$ or best-in-segment points, that’s all you have left to make a negative point about.
I guess some will pay more for the 1060 to have better perf/watt, then spend years catching that value difference up in their power bill lol.
Waiting for the arguments next gen over 10-watt differences.. with every generation that goes by, this will matter less and less than it already does now.
Folks are tools and do not see the whole story. The RX 480 has a LOT more under the hood than comparable Radeon or GeForce cards; they are using very new, top-of-the-line power circuitry on every CU which they have not had a chance to optimize for EVERY scenario, the drivers for them are NOT optimized, etc. etc.
Why is the “supposedly” GTX 970 drawing less power but performing faster in Nvidia-biased titles? I WONDER WHY. Maybe because it is Nvidia-biased, maybe because the 970 is a much older card they have had time to optimize for, maybe the power circuitry that Nvidia used/uses is older and so more “known”, etc.
The RX 480 is a terrific design. I was looking at the Radeon R9 380-380X about a year ago, give or take, and knew this was around the corner, so I waited. I currently use a 7870, having owned 2 of them and kept the “better” one; this is costing ~$100+ less than my 7870s did, is using ~40W less (not counting overclocking, etc.), and is also ~2-2.5x faster with double to quadruple the amount of memory. IMO that is a massive, nice jump.
Keep in mind ALL electrical anything can have spikes unless you put a limiter on them, and in the case of high tech, these limiters can cause instant crashes if you are too severe with their limitations via cut-in and cut-out of power. Obviously we do not want this, so there will be “play” in when the chip decides to regulate or not.
Anyway, the point is, to base conclusions on just one or 2 data points is BS when these things are amazingly complex. And while the GTX 1070/1080 are “faster” than the 900 series, they also chopped some more away and ramped up clock speeds to “make them fast”: not as many parts to power, not as much power required. Run them at lower clock speeds and see how much performance they lose; give them multi-GPU capability via DX12, oh wait, they chopped that away as well.
Long story short, I know myself and many others are quite happy with the results we see here from the RX 480 considering the performance/power/price they have delivered, and we know they WILL get better in time, not held back by the proprietary BS Nvidia does, and not suffering the driver performance castration NV has done for decades now. It takes time to fine-tune things like this, especially when your development team is MUCH smaller than the direct competition’s. If we were to go by how all the NV fanboys act, Radeon would not have been competitive at all for decades, and that is simply far from the truth; it’s just disgusting.
I know one thing: Radeons have not used under-spec components or massive raw amperage intentionally shortening component life, and did not chop things away and put limiters in place to hold back performance on purpose while still overcharging. Can you say that about NV? Nope.
Anyway, to the one above me: they didn’t improve the architecture like NV did with Maxwell? ROFL. Go do some reading instead of trollbaiting and you will see Radeon did a great deal of optimization/improvement with Polaris compared to what Nvidia did with Pascal, which for all intents and purposes is just a highly overclocked Maxwell (which itself was more or less an overclocked/optimized Kepler).
Your comment about frequency is meaningless to most people. BENCHMARKS are what matter most. AMD could not get higher frequencies because the GPU was intended for mobile, so it couldn’t overclock well (their words, not mine).
Who cares WHY NVidia is faster? They are, so I’ll buy their product.
I am recommending an after-market RX 480 to people on that budget, however. I’ll rethink that when NVidia has competition with a Polaris GPU.
Above $200 I only recommend the RX 480, GTX 1070, or GTX 1080, and none until prices drop.
Edit: Pascal, not Polaris.
Welcome to September 2014, AMD. Glad you finally caught up with an almost two-year-old nVidia product on a 28nm process.
This card isn’t a revolutionary rainbow-poopin’ wondercard, but it is a step up for AMD. It feels a bit like AMD is still one step behind nVidia,
but at least they are still chasing it in some areas.
The only concern I have is that the market is already saturated by nVidia’s 970. I mean, look at the Steam presence of that card.
But there may also be enough folks who haven’t upgraded to this performance level yet, and for them the 480 really is a no-brainer in my opinion.
I have a feeling that with the 8GB and DX12 capabilities it will age much better than the 970, and that it hasn’t shown its full potential yet.
(There are still major driver issues with GTA V and with the power states at idle, leading to about 7 watts more power draw than there should be.)
Some nVidia-biased guys on YouTube benchmarked it against an overclocked custom 970 while the stock 480 wasn’t overclocked at all.
I’m sure some “greenhorns” have already seen those benchmarks as proof of nVidia’s superiority over AMD. 8)
But anybody who argued that the 970 has the same or a similar price point now most likely forgot that the new 970 price only came into existence because of the upcoming release of the RX 480.
(A GTX 1060 might also very surely be influenced by it, price AND performance wise.)
Also, it will be more future-proof with its 8GB, DX12, and modern display adapter support.
I myself was a bit disappointed by the benchmarks after all the hype, but if you rethink it, it still is a very good card overall.
So I decided to upgrade from my R9 280, which even with OC is about 25% less powerful than the 480.
Mainly because of the newer tech inside, and because I need FreeSync and a bit more performance now for my new 34″ UWQHD curved monitor.
And I’d rather play with fewer details till Vega than buy a more expensive card from nVidia, which refuses to support FreeSync.
Otherwise I might even have bought the 1070 at an about 65% higher price.
Seeing what other sites are showing of the power draw for this card, I would be very hesitant to get one. AMD seems to be really pushing the limits by having only one connector.
Really like these detailed reviews.
Question coming from older hardware: I’m still running an HD 7970 GHz Edition. Will the 480 run well with an i3-3220, so I can carry it for another few months to a year till I can save up for upgrades? My game collection mostly consists of DX9-11 titles, so DX12 is not a concern of mine atm.
Not sure if it will run well with your i3 processor, as most sites only bench with the latest Skylake or extreme Intel processors. Nvidia seems to get more frames when a game is CPU-limited, or in DirectX 11 in general, as well as at 1080p resolution, where processor power is more relevant. The power draw over PCI Express spec for the RX 480 might be a concern for your older hardware. If you can wait a month or so, prices will be coming down because of the saturation of new cards from AMD and Nvidia. If you want a stopgap card, a Maxwell or even Fury or 300-series AMD card should start being discounted soon. Don’t buy this at full retail, as resale value may be poor later, and if you’re getting by now, keep saving.
I think it’s a good strategy from AMD to capture the 85% of the GPU market in that price bracket to win some dGPU market share from Nvidia. If not overclocked, it keeps the power draw down, and people can play all their games at 1080p at full settings fine on budget to mid-end PCs.
Let’s be honest here: that is where MOST PC gamers are at with PC specs and 1080p displays, so good move by AMD!
Why does GTA V show total VRAM as 6GB? Any explanation for that?
I honestly haven’t been this excited for the future of computer graphics in a while. Already ordered a 480!
The 480 mem spec is wrong, it’s 256GB/s. And what’s up with 5.1 TFLOPS? Most other sites report 5.8 TFLOPS or something.
256GB/s for the 8GB, 224GB/s for the 4GB.
Does WattMan support older Radeon series like the R9 295X2?
How absolutely ridiculous is our society, where we purposely disable stuff that works perfectly to achieve a lower-value product..
Capitalism is absurd.
Small correction on the first page: the R9 Nano is 175W, almost as close as the RX 480.
To be honest, if the Nano dropped in price it would be a more interesting product than the RX 480 by a nice margin…
I do wonder when they will adjust the prices, as currently the R9 390(X) and R9 Fury/Nano/Fury X make no sense in terms of price/performance against Nvidia.. 🙁
Come on AMD, drop the prices.. I want the Nano 🙂
I would return the card before it passes the retailer warranty period. This power issue is turning serious. You can wait until either AMD rectifies the problem or a partner fixes it with an 8-pin connector and better power delivery.
Quality control may have been overlooked to meet demand on a seemingly rushed card. Gotta love AMD Robert’s response: it only affects a few out of the hundreds of reviews, while failing to mention they didn’t test the same way as those that found the issue. I’d call that trivializing and spinning at its finest.
Maybe the bad cards have poor ASIC quality and shouldn’t have been released in the first place.
Poor AMD. Maybe it was part of their master plan to fry your motherboard so that you can just upgrade to the shortly coming Zen, with the shiny new motherboard it requires.
PS: I’m just kidding about the master plan. Lighten up.
This website is a very good site.
@Ryan: if I grab the 480 8GB, is there a way to force my W10 64-bit to use the VRAM as its main graphics memory? MS has an annoying tendency to favor system RAM (at all costs), 128MB of DDR3 RAM. Would this force the OS to use the 480’s VRAM instead, or have gamers got to beg till we’re on our death beds?