Testing Suite and Methodology Update
If you have followed our graphics testing at PC Perspective, you’ll know about the drastic shift we made in 2012 to support a technology we called Frame Rating. Frame Rating uses direct capture of the system’s output into uncompressed video files, along with FCAT-style scripts that analyze the video to produce statistics including frame rates, frame times, frame time variance, and game smoothness.
Readers and listeners might have also heard about the issues surrounding the move to DirectX 12 and UWP (Universal Windows Platform) and how it affected our testing methods. Our benchmarking process depends on a secondary application running in the background on the tested PC that draws colored overlays along the left-hand side of the screen in a repeating pattern, helping us measure performance after the fact. The overlay we had been using supported DirectX 9, 10, and 11, but didn’t work with DX12 or UWP games.
We worked with NVIDIA to fix that, and we now have an overlay that behaves exactly the same way as before but properly measures performance and smoothness in DX12 and UWP games. This is a big step toward maintaining the detailed analytics of game performance that let us push both game developers and hardware vendors to perfect their products and create the best possible gaming experiences for consumers.
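For readers curious how the analysis works mechanically, here is a minimal sketch, not our production FCAT scripts: the 16-color palette length, runt threshold, and input format are all assumptions for illustration. It shows how the overlay colors read down the left edge of each captured video frame can be turned into frame, runt, and drop counts.

```python
# A minimal sketch (NOT the production FCAT scripts): classify captured frames
# by reading the repeating overlay colors down the left-edge column. The
# 16-color sequence length and runt threshold are assumed for illustration.

SEQ_LEN = 16  # assumed length of the repeating overlay color sequence

def classify(columns, runt_lines=21):
    """columns: one list per captured video frame, holding the overlay color
    index observed on each scanline, top to bottom."""
    seen = []      # (color, scanlines on screen) per rendered frame observed
    dropped = 0    # rendered frames whose color never reached the capture
    prev = None
    for column in columns:
        for color in column:
            if color == prev:
                c, n = seen[-1]
                seen[-1] = (c, n + 1)   # same rendered frame, one more line
            else:
                if prev is not None:
                    # colors skipped in the sequence are dropped frames
                    dropped += (color - prev - 1) % SEQ_LEN
                seen.append((color, 1))
                prev = color
    runts = sum(1 for _, n in seen if n < runt_lines)
    return len(seen), runts, dropped

# Two 4-scanline capture frames; rendered frame color 2 never appeared:
# classify([[0, 0, 0, 1], [3, 3, 3, 3]], runt_lines=2) -> (3, 1, 1)
```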
As a result, our testing suite has been upgraded with a new collection of games and tests. Included in this review are the following:
- 3DMark Fire Strike Extreme and Ultra
- Unigine Heaven 4.0
- Dirt Rally (DX11)
- Fallout 4 (DX11)
- Grand Theft Auto V (DX11)
- Hellblade (DX11)
- Hitman (DX12)
- Rise of the Tomb Raider (DX12)
- Sniper Elite 4 (DX11)
- The Witcher 3 (DX11)
We have included racing games, third-person and first-person titles, DX11, DX12, and some synthetics, going for a mix that I think encapsulates the gaming market of today and the future as well as possible. Hopefully we can finally end the bickering in comments about not using DX12 titles in our GPU reviews! (Ha, right.)
Our GPU testbed remains unchanged, including an 8-core Haswell-E processor and plenty of memory and storage.
| PC Perspective GPU Testbed | |
|---|---|
| Processor | Intel Core i7-5960X Haswell-E |
| Motherboard | ASUS Rampage V Extreme X99 |
| Memory | G.Skill Ripjaws 16GB DDR4-3200 |
| Storage | OCZ Agility 4 256GB (OS), ADATA SP610 500GB (games) |
| Power Supply | Corsair AX1500i 1500 watt |
| OS | Windows 10 x64 |
| Drivers | AMD: 17.8 (Beta), NVIDIA: 384.94 |
In the testing pages that follow, you will find two sets of data. The first set, testing 8 games at 2560×1440 and 3840×2160, compares the Vega 64, Vega 64 Liquid, GTX 1080, GTX 1080 Ti, and the Vega Frontier Edition. The second set, at the same resolutions and settings, compares the Vega 56, Vega 64, GTX 1080, GTX 1070, and the R9 Fury X, AMD’s previous flagship GPU.
For those of you who have never read about our Frame Rating capture-based performance analysis system, the following section is for you. If you have, feel free to jump straight into the benchmark action!
Frame Rating: Our Testing Process
If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly, then post-process the resulting video to determine frame rates, frame times, frame variance, and much more.
This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion of our Frame Rating methods before moving forward!
While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own.
The PCPER FRAPS File
Previous example data
While the base graphs are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers. The first file shows a subset of the data from the RUN file (described below): the average frame rate over time as defined by FRAPS, though we combine all of the GPUs being compared into a single graph. This basically emulates the data we have been showing you for the past several years.
The PCPER Observed FPS File
Previous example data
This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown as the blue bars in the RUN file (described below). This takes out the dropped and runt frames, giving you the performance metric that actually matters: how many frames are actually shown to the gamer and contribute to the animation.
As you’ll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS graph indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC.
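To make the distinction concrete, here is a hedged sketch of how observed FPS falls out once runts and drops are excluded. The input structure is an assumption: each rendered frame’s on-screen time in milliseconds plus a runt flag, with dropped frames simply never appearing in the list.

```python
# Hedged sketch: per-second "observed FPS" once runts and drops are excluded.
# Input format is assumed; dropped frames never appear in the list at all.

def observed_fps(frames, window_ms=1000.0):
    """frames: list of (display_time_ms, is_runt) for each frame captured."""
    rates, elapsed, shown = [], 0.0, 0
    for time_ms, is_runt in frames:
        elapsed += time_ms
        if not is_runt:
            shown += 1            # runts are on screen too briefly to count
        if elapsed >= window_ms:
            rates.append(shown * 1000.0 / elapsed)
            elapsed, shown = 0.0, 0
    return rates

# 60 clean 16.7 ms frames -> [~60.0]; flag half of them as runts -> [~30.0]
```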
The PLOT File
Previous example data
The primary file generated from the extracted data is a plot of calculated frame times, including runts. The numbers here represent the amount of time each frame appears on the screen for the user. A “thinner” line across the time span represents consistent frame times, which should produce the smoothest animation for the gamer; a “wider” line, or one with a lot of peaks and valleys, indicates much more variance and is likely caused by a lot of runts being displayed.
The RUN File
While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card for that particular test. It is in this graph that you can see interesting data about runts, drops, average frame rate, and the actual frame rate of your gaming experience.
Previous example data
For tests that show no runts or drops, the data is pretty clean. This is the frames-per-second-over-time graph that has become the standard for performance evaluation of graphics cards.
Previous example data
A test that does have runts and drops will look much different. The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if the drops and runts were counted in the equation – as the FRAPS FPS measurement does. Any area in red is a dropped frame – the wider the red area, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.
The wide yellow area represents runts – the thin bands of color in our captured video that we have determined do not add to the animation on the screen. The larger the yellow area, the more often those runts appear.
Finally, the blue line is the measured FPS over each second after removing the runts and drops. We call this metric the “observed frame rate,” as it measures the actual speed of the animation the gamer experiences.
The PERcentile File
Previous example data
Scott introduced the idea of frame time percentiles months ago, but now that we have different data from direct capture as opposed to FRAPS, the results might be even more telling. In this case, FCAT is showing percentiles not by frame time but by instantaneous FPS. This tells you the minimum frame rate that will appear on the screen for any given percentage of the time during our benchmark run. The 50th percentile should be very close to the average frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected by the slowest frames.
The closer this line is to perfectly flat the better, as that would mean we are running at a constant frame rate the entire time. A steep decline on the right-hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
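For readers who want to reproduce the idea, here is a minimal sketch, assuming frame times in milliseconds as input; this is not FCAT’s code, just the same percentile-by-instantaneous-FPS concept.

```python
# A minimal sketch (assumed input, not FCAT's code): convert each frame time
# to instantaneous FPS, then read off the frame rate that the best p% of
# frames meet or exceed. The 50th percentile should land near the run's
# average FPS; the tail shows how bad the slow moments get.

def fps_percentiles(frame_times_ms, points=(50, 90, 95, 99)):
    inst = sorted((1000.0 / t for t in frame_times_ms), reverse=True)
    n = len(inst)
    return {p: inst[min(n - 1, n * p // 100)] for p in points}

# fps_percentiles([16.7] * 95 + [33.3] * 5)
# -> {50: ~59.9, 90: ~59.9, 95: ~30.0, 99: ~30.0}
```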
The PCPER Frame Time Variance File
Of all the data we are presenting, this is probably the one that needs the most discussion. In an attempt to create a new metric for gaming and graphics performance, I wanted to try to find a way to define stutter based on the data sets we had collected. As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, t_display, or on both levels. Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?
We define a single frame variance as the difference between the current frame time and the previous frame time – how consistent the timing of two consecutive frames presented to the gamer is. However, as I found in my testing, plotting this frame variance is a nearly perfect match to the data presented by the minimum FPS (PER) file created by FCAT. To be more specific, stutter is only perceived when there is a break from the preceding animation frame rates.
Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames. Then, by sorting these variances and plotting them in percentile form, we get an interesting look at potential stutter. Comparing the frame times to a running average rather than just to the previous frame should prevent false positives from legitimate performance peaks or valleys found when moving from a highly compute-intensive scene to a lighter one.
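A sketch of that running theory in code follows; the 20-frame trailing window matches the description above, while the variable names and the absolute-difference detail are my assumptions.

```python
# Sketch of the stutter metric described above: each frame's variance is its
# deviation from the trailing average of the previous 20 frame times, and the
# sorted result is what gets plotted in percentile form. Details assumed.

from collections import deque

def frame_time_variance(frame_times_ms, window=20):
    history = deque(maxlen=window)       # the previous `window` frame times
    variances = []
    for t in frame_times_ms:
        if history:
            avg = sum(history) / len(history)
            variances.append(abs(t - avg))   # deviation from recent average
        history.append(t)
    return sorted(variances)             # percentile-ready; worst at the end
```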
Previous example data
While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our gameplay testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions. So much so, in fact, that I am going to call this data the PCPER ISU, which beer fans will appreciate as the acronym of International Stutter Units.
When comparing these results, you want to see a line as close to the 0 ms mark as possible, indicating very little frame time variance compared to the running average of previous frames. There will be some inevitable incline as we reach the 90+ percentile, but that is expected with any gameplay sequence that varies from scene to scene. What we do not want to see is a sharper upward slope, which would indicate higher frame variance (ISU) and could be a sign of microstuttering and hitching problems.
Would have been interesting
Would have been interesting to see some Ryzen-based testing as well. Maybe you guys could do a follow-up review with that?
How do Vega 56 and Vega 64
How do Vega 56 and Vega 64 compare to Vega FE in professional workstation tasks? If half the VRAM doesn’t crush it too hard, Titan XP performance at half price would be sweet (for those who aren’t in it for hardcore gaming).
Take off, ya hosers.
Take off, ya hosers.
What a bunch of sellouts
What a bunch of sellouts. Gold, really? Neither card deserves an award, much less gold.
I’m sorry if I missed it but
I’m sorry if I missed it, but what were the clock speeds of the GTX 1080 and 1070, or were they at stock?
From Anandtech’s Vega review:
From Anandtech’s Vega review:
“Connecting the memory controllers to the rest of the GPU – and the various fixed function blocks as well – is AMD’s Infinity Fabric. The company’s home-grown technology for low-latency/low-power/high-bandwidth connections, this replaces Fiji’s unnamed interconnect method. Using the Infinity Fabric on Vega 10 is part of AMD’s efforts to develop a solid fabric and then use it across the company; we’ve already seen IF in use on Ryzen and Threadripper, and overall it’s a lot more visible in AMD’s CPUs than their GPUs. But it’s there, tying everything together.
On a related note, the Infinity Fabric on Vega 10 runs on its own clock domain. It’s tied to neither the GPU clock domain nor the memory clock domain. As a result, it’s not entirely clear how memory overclocking will fare on Vega 10. On AMD’s CPUs a faster IF is needed to carry overclocked memory. But since Vega 10’s IF connects a whole lot of other blocks – and outright adjust the IF’s clockspeed based on the workload need (e.g. video transcoding requires a fast VCE to PCIe link), it’s not as straightforward as just overclocking the HBM2. Though similarly, HBM1 overclocking wasn’t very straightforward either, so Vega 10 is not a great improvement in this regard.”
Interesting, so maybe that Infinity Fabric on Vega could cross between GPUs on the same PCIe card – a dual Vega 56 card to take on 4K! More of this GPU-based IF technology/IP needs to be looked at, for the dual RX Vega 56 PCIe card SKUs that may come!
Whether that IF can cross over GPU dies like it crosses over CPU dies is the big question to be looked at.
Vega gets its biggest
Vega gets its biggest performance gains from an HBM2 overclock, not from touching its already high core clocks, especially on the liquid version, so I’m confused as to why you didn’t touch the memory speeds?? That’s like a 10% performance increase across the board with little impact on power consumption, @#% really??
I think I’ll just stick with my 2
I think I’ll just stick with my 2 x R9 Nanos.
There doesn’t seem to be much to gain by going to Vega.
Did you re-test Vega FE or
Did you re-test Vega FE or are you using the results from previous testing?
I guess the actual question here is whether all the changes made for RX Vega also apply to Vega FE or not?
VEGA is good only in Canada,
VEGA is good only in Canada, Greenland, Norway, Sweden and Russia, but there is a rumor that VEGA will be banned from the global market because of global warming….
That wouldn’t be wise to have
That wouldn’t be wise to have all the Vega cards up north. Might detach a few giant icebergs from the northern glacier with all that excess heat.
Ha ha ha that’s funny but I’m
Ha ha ha, that’s funny, but I’m all for a consumer GPU tax to maybe fund those two Toshiba/Westinghouse nuclear reactors in Georgia. So let’s TAX consumer GPUs worldwide and get some thorium reactors online, and that should offset global warming. Let’s make Toshiba/Westinghouse finish the job at break-even costs and use a consumer GPU tax to do so. Tax Toshiba’s NAND production also. Bitcoin mining GPUs should be double taxed, and any Bitcoin farms should be inspected to make sure they are taxed enough to cover the global warming impacts of coin mining.
They could even build thorium reactors at all the recently closed nuke plants and use them to burn up all that spent nuclear fuel, generating power in the process until there is little or no waste remaining that would need to be shipped off site and stored for hundreds of years.
I’d be for a GPU tax you
I’d be for the GPU tax you describe, but with one caveat: the company with the least efficiency pays more; the company that makes more efficient cards should be penalized less. A petrol tax of a few cents would probably net more revenue, however.
No that GPU tax is paid by
No, that GPU tax is paid by the consumer at the point of sale, and all consumer GPUs need a global warming tax. No business or professional GPU tax, as those are GPUs used for productive purposes. Petrol is already taxed. All gaming/mining GPUs need to be taxed because that usage is not as necessary as, say, GPUs used for medical/medicine research, engineering, and other truly productive uses of CPUs/GPUs.
AMD’s Vega GPUs, if they are underclocked, can be closer to Nvidia’s efficiency levels, and AMD’s Vega GPUs clocked lower and used for compute workloads can even beat Nvidia in compute performance/watt metrics.
So only consumer gamers and coin miners should get to pay the GPU tax, because those usages are not essential and use plenty of power and tax the grid. Add some CPU taxes there also for non-professional CPU usage.
You do not tax companies on anything but net profits, and it’s the companies’ stockholders that get to pay capital gains taxes. Taxing net profits encourages companies to invest more of the profits back into their product development, and that also creates more jobs.
I also believe in a military draft lottery based on finding those military-age gamers with a propensity toward FPS/military games, where everyone of military service age across the whole population gets a single draft number and a single capsule tossed into the military draft lottery barrel.
And then each FPS/military game purchased nets that gamer an extra required capsule, with his/her/in-between’s extra draft capsule added to increase that person’s chances of getting some real FPS/military gaming experience, without the chance of respawn if they are taken out!
Special methods should be used to find and reward the FPS/military high-score earners with some extra capsules earned into the draft lottery barrel and an even greater chance of getting some really high-resolution war games experience as the proud property of U-Sam’s Government Issue FPS/Wargaming team.
remove double post, s p a m
remove double post, s p a m filter be crazy!
I would strongly suggest you
I would strongly suggest you stop blaming me for your mistakes.
Through no fault of your own,
Through no fault of your own, the entire software stack on which yours and other websites run is buggy, from the Drupal open-source CMS framework on down the call stack into IE/other browsers and deeper into the Windows OS software frameworks, and that includes the frameworks that any web-provided spam filter (service) is based on (open source and proprietary). That’s just the nature of the complex software/hardware state machine designs the entire computing industry has been built on since forever, a buggy state of affairs that can never be proven entirely correct.
That said, I do get your /S.
But it appears that M$ has over-tweaked its Windows base, and derived, textbox class objects, and that’s as buggy as hell also by any reasonable software standards. So double posts are par for the course, and sometimes it’s the failure of the spam filter software stack and sometimes other software stacks, especially where web-based posting functionality and software framework transactional atomics are concerned.
Just try posting a reply to a post on the second or higher page of your own website, if your forum posts/replies run into multi-page lengths; that is buggy with respect to users’ PC OS systems and the Drupal/script/PHP/DB/other systems software stacks as well.
But thank goodness that the latest M$ cumulative IE11 security updates have fixed some of the problems with long-running ad scripts totally borking the browser (IE11), because that was really a problem for a few months there, and IE11 is not the best way to browse by any stretch of the imagination.
There is no better example of why Linus Torvalds insists on using C and not C++ for the Linux kernel than looking at some of the Windows OS/frameworks (mostly C++) and how buggy they are. Linus has the right idea, but even C is not without its issues.
I am the spam filter, there
I am the spam filter; there is no automatic filter in place.
Currently “You” are, but not
Currently “You” are, but not always, and there is that captcha thingy that freaks out, and Drupal nurples from time to time also, along with the spam filter service borking.
And NOW we know that you are in fact a Turing-complete AI, Jeremy! How’s your brother Max doing? I hear that he went into a VR bar and got so row hammered that he had to be cold booted.
Have been since we moved to
Have been since we moved to Drupal years ago. There can be flakiness with double posts, especially on iThings, but that is a different issue.
Max is hanging out with a bad crowd now; Enzo and he are trying to convince Dot to start a video site.
No cards available at
No cards available at launch.
Checked all my usual sites.
Now I have been at work and constantly checked yesterday and today. A couple of times an hour. Five different sites. In Ireland/UK.
The pre-order prices have an extra 100 added?
No, it’s not supposed to be any better at mining than a Fury, so what’s going on?
I wouldn’t be getting one except that I have a widescreen FreeSync monitor.
Might hook my 1080 up, see what it’s like, and just get another one.
The extra $100 was added by
The extra $100 was added by AMD, as the rebate was limited to the initial batch of Vega 64 cards. What’s this now, AMD is gouging its customers? I thought they were the good guys.
Ryan – you guys are killing
Ryan – you guys are killing me with the fonts on these graphs; are they for ants?
Any word on what monitor
Any word on what monitor outputs are on these cards? The only thing that will make me want a new video card is 4k+ resolution with 120+ Hz refresh rate support.
Three displayport 1.4 ports
Three DisplayPort 1.4 ports and one HDMI 2.0. Supposedly you can have 120 Hz support at 4K with DP 1.4.
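A quick back-of-envelope check of that claim; the reduced-blanking timing totals below are assumed, illustrative figures rather than numbers from the review.

```python
# Rough check that 4K @ 120 Hz, 8-bit RGB fits in DP 1.4 HBR3. The blanking
# totals are assumed CVT-R2-style figures, not official numbers.
h_total, v_total, hz, bpp = 3920, 2222, 120, 24
needed = h_total * v_total * hz * bpp / 1e9        # ~25.1 Gb/s
available = 4 * 8.1 * 8 / 10                       # 4 lanes x 8.1 Gb/s, 8b/10b
print(f"need ~{needed:.1f} Gb/s of {available:.2f} Gb/s")  # just squeezes in
```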
Vega interests me a lot from
Vega interests me a lot from the technical side; how much of this translates into additional performance I don’t know, but it will have an impact. Currently so many features of Vega aren’t enabled, and a lot of the stuff, even when enabled, will do diddly squat for most games out now.
But off the top of my head, the things not enabled include HBCC; that could have a huge effect – not on performance exactly, but 512TB of addressable VRAM is just mind-blowing.
Primitive shaders, which could be a massive bump in performance.
Rapid packed math, which is basically black magic to me, but as I understand it, it’s basically hyperthreading for a GPU; if that’s wrong, please correct me.
FP16 packed math hasn’t
FP16 packed math hasn’t anything to do with hyperthreading; it just combines two 16-bit operations on a single FP32 ALU, doubling throughput at the cost of reduced precision. It is not a magic bullet, though: there was a reason the industry moved from FP16 to FP32 fifteen years ago – 16 bits aren’t enough for most graphics work.
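To illustrate the packing idea in the comment above, here is a NumPy analogy, not how the GPU hardware is actually programmed: two FP16 values share the storage of one FP32 slot, and one vector operation covers both lanes.

```python
# Two FP16 values occupy the storage of one FP32 slot; one vectorized op
# covers both lanes, trading precision for throughput. Illustration only.
import numpy as np

a = np.array([1.5, 2.25], dtype=np.float16)
b = np.array([0.5, 4.0], dtype=np.float16)
print(a.view(np.uint32))   # both halves packed into a single 32-bit word
print(a * b)               # one vectorized multiply -> [0.75 9.0] at FP16
```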
A GOLD award for Vega? You
A GOLD award for Vega? You are joking, right?? It’s a piece of hot shit, a total technological failure, a regression in almost every possible way. More than two generations behind the competition now and priced like garbage. You guys are tarnishing your reputation with that nonsense. AMD doesn’t need your charity awards.
Go read that techgage article
Go read that Techgage article linked in many posts on this forum thread! Vega 64 is a compute monster for thousands less than that Quadro P6000 24GB! Even the Titan XP falls to Vega on some workstation compute workloads.
AMD’s stockholders know that Vega is a winner in the professional compute/AI market, which counts for more in high-margin revenue than any gaming-only market will produce. And Nvidia’s JHH knows this about compute/AI markets also!
Vega 64 is right up there with the GTX 1080 in gaming in spite of all that extra compute; ditto for the Vega 56, with a little more compute stripped out but still enough ROP/TMU resources to closely match the GTX 1080’s. So the Vega 56 will beat the GTX 1070 and most likely can be overclocked to get nearer to the GTX 1080 in gaming performance metrics.
And the miners and pro markets love all the extra compute that the Vega 10 GPU micro-arch can spare, and that’s money in AMD’s bank, same as any gaming-only money/revenues!
Money Be Money Sonny!
You mean many posts made by
You mean many posts made by the same guy with different names? 🙂
Compute monster… really? Vega has the same theoretical compute power as GP102, but in practice, due to its lower compute core occupancy, it won’t even match it.
A gold award for matching
A gold award for matching yesteryear’s performance? Wow, PCPer… what happened to you guys? *sigh* I remember when you used to have integrity.
see GinormousDaftsNot’s reply
see GinormousDaftsNot’s reply to Anonymousdfdf3!
Go read that techgage article
“Go read that techgage article linked to in many posts on this forum thread!”
So they awarded the gold for someone else’s article?
“Vega 64 is a compute monster for thousands less than that Quadro P6000-24GB! Even the Titan XP falls to Vega on some workstation compute workloads.”
Wasn’t the Frontier Edition made for that purpose? This is a gaming card, you putz. Nobody gives a crap about compute in gaming cards – they should be judged on their GAMING performance. Goddamn intellectually bankrupt shills.
Vega is OK. A custom-cooled,
Vega is OK. A custom-cooled, overclocked 1070 or 1080 will easily beat Vega out of the box and use less power. The higher production costs might limit availability.
see GinormousDaftsNot’s reply
see GinormousDaftsNot’s reply to Anonymousdfdf3!
Why does this review show
Why does this review show such a large difference between RX Vega 64 and Vega Frontier Edition when this Witcher 3 video shows that they’re essentially identical?
https://www.youtube.com/watch?v=RNXGr-8jcnE
I just picked one up in
I just picked one up in Akihabara.
Winter is coming. This will power my games and make a good space heater.