Testing Suite and Methodology Update
If you have followed our graphics testing at PC Perspective, you’ll know about a drastic shift we made in 2012 to support a technology we call Frame Rating. Frame Rating uses direct capture of the output from the system into uncompressed video files, along with FCAT-style scripts that analyze the video to produce statistics including frame rates, frame times, frame time variance, and game smoothness.
Readers and listeners might have also heard about the issues surrounding the move to DirectX 12 and UWP (Universal Windows Platform) and how they affected our testing methods. Our benchmarking process depends on a secondary application running in the background on the tested PC that draws colored overlays along the left-hand side of the screen in a repeating pattern, helping us measure performance after the fact. The overlay we had been using supported DirectX 9, 10, and 11, but didn’t work with DX12 or UWP games.
We worked with NVIDIA to fix that, and we now have an overlay that behaves exactly the same way as before but properly measures performance and smoothness in DX12 and UWP games. This is a big step toward maintaining the detailed analytics of game performance that enable us to push both game developers and hardware vendors to perfect their products and create the best possible gaming experiences for consumers.
As a result, our testing suite has been upgraded with a new collection of games and tests. Included in this review are the following:
- 3DMark Fire Strike Extreme and Ultra
- Unigine Heaven 4.0
- Dirt Rally (DX11)
- Fallout 4 (DX11)
- Grand Theft Auto V (DX11)
- Hellblade (DX11)
- Hitman (DX12)
- Rise of the Tomb Raider (DX12)
- Sniper Elite 4 (DX11)
- The Witcher 3 (DX11)
We have included racing games, third-person and first-person titles, DX11, DX12, and some synthetics, going for a mix that I think encapsulates the gaming market of today and the future as best as possible. Hopefully we can finally end the bickering in comments about not using DX12 titles in our GPU reviews! (Ha, right.)
Our GPU testbed remains unchanged, including an 8-core Haswell-E processor and plenty of memory and storage.
| PC Perspective GPU Testbed | |
|---|---|
| Processor | Intel Core i7-5960X Haswell-E |
| Motherboard | ASUS Rampage V Extreme X99 |
| Memory | G.Skill Ripjaws 16GB DDR4-3200 |
| Storage | OCZ Agility 4 256GB (OS), ADATA SP610 500GB (games) |
| Power Supply | Corsair AX1500i 1500 watt |
| OS | Windows 10 x64 |
| Drivers | AMD: 17.10.2, NVIDIA: 388.09 |
For those of you who have never read about our Frame Rating capture-based performance analysis system, the following section is for you. If you have, feel free to jump straight into the benchmark action!
Frame Rating: Our Testing Process
If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly and then post-process the resulting video to determine frame rates, frame times, frame variance, and much more.
This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion about our Frame Rating methods before moving forward!
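For readers who want a concrete picture of what that post-processing involves, here is a minimal sketch in Python of the core idea: scan the left-edge overlay column of each captured frame for the repeating color sequence, and convert each color band’s scanline count into on-screen time. The palette, geometry, and function names below are our own illustrative assumptions, not the actual FCAT scripts.

```python
# A minimal sketch (not PC Perspective's actual FCAT scripts) of overlay
# extraction: find runs of overlay colors in one captured frame's left-edge
# pixel column, then convert scanline counts into display time.

from typing import List, Tuple

# Hypothetical repeating overlay palette; the real overlay uses a known
# color sequence, but these RGB values are placeholders.
OVERLAY_SEQUENCE = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]

def extract_frame_bands(left_column: List[Tuple[int, int, int]]) -> List[Tuple[int, int]]:
    """Return (color_index, scanline_count) runs found in one captured
    frame's left-edge pixel column, top to bottom."""
    bands: List[Tuple[int, int]] = []
    for pixel in left_column:
        idx = OVERLAY_SEQUENCE.index(pixel) if pixel in OVERLAY_SEQUENCE else -1
        if bands and bands[-1][0] == idx:
            bands[-1] = (idx, bands[-1][1] + 1)   # extend the current run
        else:
            bands.append((idx, 1))                # new color band starts here
    return bands

def band_time_ms(scanlines: int, total_scanlines: int, refresh_hz: float = 60.0) -> float:
    """Convert a band's scanline count into the time that game frame was on
    screen, as a fraction of one refresh interval."""
    return (scanlines / total_scanlines) * (1000.0 / refresh_hz)
```

Summing a game frame’s band times across consecutive captured frames reconstructs how long it was actually displayed, which is the raw material for every graph described below.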
While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we are generating with additional code of our own.
The PCPER FRAPS File
Previous example data
While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers. The first file shows a subset of the data from the RUN file (detailed below): the average frame rate over time as defined by FRAPS, though we combine all of the GPUs being compared into a single graph. This basically emulates the data we have been showing you for the past several years.
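As a rough illustration of how that FPS-over-time series can be derived from per-frame data, here is a minimal sketch; the bucketing-by-elapsed-second approach and all names are our assumptions rather than the exact scripts we run.

```python
# A minimal sketch of a FRAPS-style "FPS over time" series: bucket frame
# timestamps into one-second intervals and count the frames in each. All
# frames count here, including runts and drops, just as FRAPS would.

def fps_over_time(frame_timestamps_s):
    """frame_timestamps_s: ascending frame start times in seconds.
    Returns frames-per-second counts, one entry per elapsed second."""
    if not frame_timestamps_s:
        return []
    start = frame_timestamps_s[0]
    buckets = {}
    for t in frame_timestamps_s:
        second = int(t - start)
        buckets[second] = buckets.get(second, 0) + 1
    last_second = int(frame_timestamps_s[-1] - start)
    return [buckets.get(sec, 0) for sec in range(last_second + 1)]
```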
The PCPER Observed FPS File
Previous example data
This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown as the blue bars in the RUN file. This takes out the dropped and runt frames, giving you the performance metric that actually matters: how many frames are actually being shown to the gamer to produce the animation on screen.
As you’ll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS graph indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC.
The PLOT File
Previous example data
The primary file generated from the extracted data is a plot of calculated frame times, including runts. The numbers here represent the amount of time each frame appears on the screen for the user; a “thinner” line across the time span represents consistent frame times and thus should produce the smoothest animation for the gamer. A “wider” line, or one with a lot of peaks and valleys, indicates much more variance and is likely caused by a lot of runts being displayed.
The RUN File
While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card in that particular test. It is in this graph that you can see interesting data about runts, drops, average frame rate, and the actual frame rate of your gaming experience.
Previous example data
For tests that show no runts or drops, the data is pretty clean. This is the frames-per-second-over-time graph that has become the standard for performance evaluation on graphics cards.
Previous example data
A test that does have runts and drops will look much different. The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation – as the FRAPS FPS measurement does. Any area in red is a dropped frame – the wider the red area, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.
The wide yellow area is the representation of runts – the thin bands of color in our captured video that we have determined do not add to the animation of the image on the screen. The larger the area of yellow, the more often those runts are appearing.
Finally, the blue line is the measured FPS over each second after removing the runts and drops. We are going to be calling this metric the “observed frame rate” as it measures the actual speed of the animation that the gamer experiences.
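To make the relationship between those three elements concrete, here is a minimal sketch of how observed FPS can be computed once each frame has been classified. The 21-scanline runt cutoff is FCAT’s commonly cited default, but treat it, and the record layout, as assumptions for illustration.

```python
# A minimal sketch of the "observed frame rate": discard dropped frames and
# frames whose on-screen height falls under the runt threshold, then count
# what is left over the capture window.

RUNT_THRESHOLD_SCANLINES = 21  # commonly cited FCAT default; an assumption here

def observed_fps(frames, capture_seconds):
    """frames: list of dicts like {"scanlines": int, "dropped": bool}.
    Returns FPS counting only frames the gamer actually saw."""
    visible = [f for f in frames
               if not f["dropped"] and f["scanlines"] >= RUNT_THRESHOLD_SCANLINES]
    return len(visible) / capture_seconds

# Example: 3 of 4 frames survive over one second -> observed FPS of 3.0.
sample = [{"scanlines": 540, "dropped": False},
          {"scanlines": 5,   "dropped": False},   # runt
          {"scanlines": 500, "dropped": False},
          {"scanlines": 520, "dropped": False}]
print(observed_fps(sample, capture_seconds=1.0))
```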
The PERcentile File
Previous example data
Scott introduced the idea of frame time percentiles months ago, but now that we have different data using direct capture as opposed to FRAPS, the results might be even more telling. In this case, FCAT is showing percentiles not by frame time but instead by instantaneous FPS. This tells you the minimum frame rate that will appear on the screen for any given percentage of the time during our benchmark run. The 50th percentile should be very close to the average frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected.
The closer this line is to being perfectly flat the better as that would mean we are running at a constant frame rate the entire time. A steep decline on the right-hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
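Here is a minimal sketch of that percentile calculation, under the assumption that we already have per-frame display times: convert each frame time to an instantaneous FPS, sort descending, and read off the frame rate sustained at each percentile. The function and parameter names are ours, not FCAT’s.

```python
# A minimal sketch of instantaneous-FPS percentiles: at the p-th percentile,
# p percent of all frames ran at or above the reported frame rate.

def fps_percentiles(frame_times_ms, percentiles=(50, 90, 95, 99)):
    """frame_times_ms: per-frame display times in milliseconds.
    Returns {percentile: minimum FPS sustained that fraction of the time}."""
    fps_sorted = sorted((1000.0 / t for t in frame_times_ms), reverse=True)
    n = len(fps_sorted)
    results = {}
    for p in percentiles:
        index = min(n - 1, int(n * p / 100))  # walk down the sorted list
        results[p] = fps_sorted[index]
    return results
```

A perfectly flat percentile line means every entry in the sorted list is identical, which is exactly the constant-frame-rate case described above.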
The PCPER Frame Time Variance File
Of all the data we are presenting, this is probably the one that needs the most discussion. In an attempt to create a new metric for gaming and graphics performance, I wanted to try to find a way to define stutter based on the data sets we had collected. As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, t_display, or on both levels. Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?
We define a single frame variance as the difference between the current frame time and the previous frame time – how consistent the timing of the two frames presented to the gamer is. However, as I found in my testing, plotting the value of this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT. To be more specific, stutter is only perceived when there is a break from the previous animation frame rates.
Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames. Then, by sorting these variances and plotting them in percentile form, we can get an interesting look at potential stutter. Comparing the frame times to a running average rather than just to the previous frame should prevent legitimate performance peaks or valleys, found when moving from a highly compute-intensive scene to a lower one, from skewing the results.
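For those who want to see the mechanics, here is a minimal sketch of the variance calculation as described above: each frame time is compared to the running average of the previous 20 frame times, and the resulting deviations are sorted for percentile plotting. The 20-frame window comes from the text; whether the original metric uses signed or absolute deviation is not stated, so the absolute value here is an assumption.

```python
# A minimal sketch of the frame time variance metric: deviation of each
# frame time from the trailing 20-frame average, sorted for a percentile plot.

def frame_time_variance(frame_times_ms, window=20):
    """Return sorted per-frame deviations (ms) versus a trailing average."""
    variances = []
    for i in range(window, len(frame_times_ms)):
        running_avg = sum(frame_times_ms[i - window:i]) / window
        # Absolute deviation; signed vs. absolute is our assumption.
        variances.append(abs(frame_times_ms[i] - running_avg))
    return sorted(variances)
```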
Previous example data
While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our gameplay testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions. So much so, in fact, that I am going to call this data the PCPER ISU, which beer fans will appreciate as the acronym for International Stutter Units.
To compare these results, you want to see a line that is as close to the 0ms mark as possible, indicating very little frame time variance when compared to a running average of previous frames. There will be some inevitable incline as we reach the 90+ percentile, but that is expected with any gameplay sequence that varies from scene to scene. What we do not want to see is a sharper upward slope, which would indicate higher frame variance (ISU) and could be an indication that the game exhibits microstuttering and hitching problems.
Performance looks great, at a great price. NVIDIA saw an opportunity and brought the heat to AMD’s door, with a ‘refresh’ no less.
And in Canada, Vega 56 continues to retail for $750 (Vega 64 for $950), while the GTX 1070Ti can be had for $650. Even the recent “pity us” price cuts won’t save Vega. The lack of AIB cards isn’t helping either. What is going on over there, AMD? For god’s sake, apply some logic to your thinking.
Shootin’ from the hip in another great review, Ryan. Nice work!
Weird assessment there. Vega 56 selling at $750 has little to nothing to do with AMD’s pricing and simply signifies crypto-currency demand – they’re selling every one they make. Not exactly a “poor me” situation.
I wholeheartedly believe that if AMD were selling every single card they build, without being able to keep stock due to demand, then there would be absolutely no reason for AMD to provide a price cut on the Vega line. Especially when they are confident their GPU can continue to ride the crypto-currency wave.
Looking forward to seeing how pricing develops in the coming months. As someone who uses their GPU for gaming, I’m patiently awaiting the return of MSRP pricing.
edit:
So the question is:
1. Is AMD cutting prices because they know crypto miners are now going to gravitate toward the 1070Ti, or
2. Are they now forced to provide a competitive price on the VEGA line for the gaming community due to the 1070Ti?
Further, the ‘poor us’ statement was about AMD no longer being able to price gouge their product line now that competition has come to town.
As a Canadian shopper, I find the best value is a 1080Ti. I just bought one because it’s the only video card that regularly sells below MSRP. I’ve seen them as low as $780 with a blower cooler. I got a triple-fan 1080Ti for $860 with Shadow of War, which I was going to buy anyway.
That's pretty impressive; where did you score that one from?
So the real conclusion is don’t buy a 1080; buy a 1070Ti and overclock it – if you can buy it at or close to MSRP.
While PC Per is one of my favorite sites… the FPS charts for the different games are terrible. A bunch of squiggly lines running together, and you have to try to figure out what’s what. At least they got it right on the 3DMark and Unigine Heaven parts of the review. Am I the only one who feels that way?
If you have trouble wrapping your mind around the Frame Rating graphs, you can try two things:
1. Re-read the page on testing methodology as a refresher.
2. Read the tables at the bottom of each page that indicate percent differences between the competitors.
I agree with the other posters; I don’t even look at the stupid graph data for FPS info. I quickly scroll down to your % stats instead. The graph is NOT easy to decipher. Also, you should include some other games; your choices are all older NVIDIA-favoring games, which makes you look like a paid NV shill.
Also some overclocked stats, because who doesn’t overclock their GPU at least a little?
I agree with you 100%. Ryan, please try to mix in a little bit more of a layman’s quick chart, like Guru3D does, with your usual frame time charts, etc.
All of this just underscores how poorly AMD is doing in this segment. The decision to go full steam ahead on HBM and HBM2 in their consumer product lines has nearly killed them. NVIDIA was wise to back away. No doubt NVIDIA is just sitting on piles and piles of differently binned chips, waiting to unleash them. We haven’t even seen a cut-down GP102, which we likely would have seen if Vega 64 had been up to snuff and soundly beaten the 1080.
If not for the renewed cryptocurrency mining craze, AMD might be close to shuttering or selling their GPU business.
I want more DX12/Vulkan titles tested. The GTX 1070Ti is more of a 1080 Lite: NVIDIA has lots of binned dies that did not have a full GTX 1080’s complement of working shaders and were not able to be made into actual 1080s. So NVIDIA now has a GTX 1070Ti made from those not-quite-1080-grade dies that has most of the 1080’s performance save a few percent. NVIDIA is making more than it would have made on the 1070, so the 1070Ti will take most of its sales away from the 1070 and a few from the 1080/AIB business. And it’s more about the AIB business for NVIDIA this late in the Pascal microarchitecture game.
And comments like this make me LOL:
“If not for the renewed cryptocurrency mining craze, AMD might be close to shuttering or selling their GPU business.”
Do you really think that AMD created the Vega 10 GPU micro-arch for the consumer market ONLY, any more than AMD created the Zeppelin server die (binned down into the Summit Ridge die/platform for Ryzen/Threadripper) for the consumer market ONLY? AMD would get rid of its consumer gaming business before it would ever spin off RTG, as those Vega 10 dies are more valuable to AMD in the professional GPU market than in a consumer market that really cannot afford to pay a proper markup for GPUs.
AMD also has its APUs, pairing Radeon graphics with its x86 Zen CPU micro-arch, and that market will probably produce more revenue via integrated graphics than AMD currently gets from discrete gaming GPU sales, crypto-mining not included.
AMD’s future is riding on its Epyc professional CPU SKUs and its Radeon Pro WX 9100 (based on the Vega 10 die) and Radeon Instinct MI25 (also based on the Vega 10 die) professional GPU SKUs. AMD’s dependency on the consumer sales market is why its stock price is so volatile and has been so low for so many years. AMD is executing its plan to get far away from any consumer/gaming-only dependency for the majority of its future revenues. And strangely enough, so is NVIDIA, with its plans not to be dependent on consumer markets where the revenues and margins do not really pay the bills.
So AMD can still sell Vega 10 based GPUs to some coin miners (Polaris based GPUs also), and some Ryzen/Threadripper SKUs to the consumer market. But the real future cash cows for AMD are Epyc and the Radeon Pro WX 9100/lower WX variants and Radeon Instinct MI25/lower Instinct variants that get the much higher margin markups and make AMD some profits.
AMD could very well survive selling its GPUs to the pro markets only, but AMD will continue to sell to the consumer markets, and ditto for NVIDIA, as the consumer market is where the underperforming NVIDIA/AMD GPU dies that do not make professional grade can be binned and sold to recoup some expenses and maybe turn a little profit. The revenues that go along with those sales also help pay some business operating expenses.
For sure, both NVIDIA and AMD need those consumer revenues, but both are looking to the markets that can generate higher revenues. Even NVIDIA’s non-consumer GPU/accelerator (compute/AI) and automotive revenues are about to surpass its consumer GPU revenues as its major source of income, and AMD’s Epyc sales alone, not including its professional GPU accelerator sales, will become AMD’s major revenue producer, with AMD’s consumer CPU sales contributing to a much lesser degree.
AMD’s stock price volatility will disappear once those Epyc sales and professional GPU sales revenues surpass, by a wide margin, AMD’s combined consumer console and gaming revenues. That consumer-revenue-only dependency is going to be history soon enough for AMD. AMD will never spin off RTG, as AMD’s APU and professional-market GPU revenues are going to become better revenue producers than any discrete gaming GPU revenues. The AI market is a big new potential revenue producer for the entire professional computing market, representing as many billions of dollars in sales as the regular server market, and that’s a large market itself.
You wasted a lot of time writing this assumption that no one will read.
Don’t be sad cause you did. This is the future.
Ryan, is there any word on a time frame for AIB Vega cards? I purchased 2 Zotac Mini 1070s for mining and would like to snag some Vega 56s as well. I swear, in 5 years we are all going to look back at how horrible a launch this was for AMD. No wonder Raja is MIA. He needs to bring his ass back to work and make sure his real baby Navi is better than this Vega BS.
Is GDDR5 10 cents a gig yet?
This is a great card. I also saw this article on it and it was amazing. The NVIDIA GeForce GTX 1070 is a powerful and affordable piece of gaming equipment.
https://gamersconduit.com/nvidia-geforce-gtx-1070-more-powerful/
Seriously, you sign up and immediately start posting links to other sites' reviews?