Testing Suite and Methodology Update
Along with a new generation of graphics cards comes an updated GPU testbed for us here at PC Perspective. A lot has changed about desktop PCs since we last updated our GPU testbed in 2015.
With the release of AMD's Ryzen CPUs, and Intel's subsequent Coffee Lake CPUs, core counts have been climbing in consumer-level processors after years of quad-core, eight-thread stagnation.
While we usually build our GPU testbeds on Intel's HEDT platform (see our previous choice of the Core i7-5960X), this time we decided to go with the six-core, twelve-thread Intel Core i7-8700K.
Given the increasing core counts of consumer processors, and the diminished focus on multi-GPU setups, we felt it was time to bring our GPU testbed down to a more reasonable price level.
| PC Perspective GPU Testbed (2018) | |
|---|---|
| Processor | Intel Core i7-8700K |
| Motherboard | ASUS ROG Z370-H Gaming |
| Memory | Corsair Vengeance LPX DDR4-3200 (running at DDR4-2666) |
| Storage | Samsung 850 EVO 250GB (OS), Micron 1100 2TB (games) |
| Power Supply | Corsair AX1500i (1500 watt) |
| OS | Windows 10 x64 Version 1803 (RS4) |
| Drivers | AMD: 18.8.2 |
Discounting the overkill 1500W power supply, which we are using because it was already modified for our power measurement needs, and the 2TB secondary SSD for game storage, this new build minus the GPU comes in at just around $1000. We feel this price point is a lot more reasonable than testbeds of yore (with a $1600 processor, for example), and more representative of the PC gaming community at large, while still not bottlenecking the GPU being tested.
In testing this new RTX series of GPUs, there are a few interesting comparisons to make. First, we want to look at generation-over-generation performance against the previous GTX 10-series products. Next, of course, we'll want to compare these new GPUs to the fastest currently available options from both NVIDIA and AMD. The cards we will be testing are listed below:
- NVIDIA GTX 1080 (Founders Edition) – $480 (Third Party Cards)
- NVIDIA GTX 1080 Ti (Founders Edition) – $700 (Third Party Cards)
- NVIDIA RTX 2080 (Founders Edition) – $799
- NVIDIA RTX 2080 Ti (Founders Edition) – $1199
- AMD RX Vega 64 (Air-cooled reference version) – $550 (Third Party Cards)
As for the games we tested, we wanted to update our test suite with some of the most modern PC titles, while retaining a few older titles that are still immensely popular.
- Far Cry 5
- Wolfenstein II: The New Colossus
- Ashes of the Singularity: Escalation
- F1 2018
- Grand Theft Auto V
- Sniper Elite 4
- Strange Brigade
- The Witcher 3
- Hitman (2016)
Our Testing Process
While the way we present our data has been tweaked a bit, our testing methodology has not. We are still using the capture-based Frame Rating technique that we helped pioneer back in 2013. For those who are unaware of Frame Rating, you can read this great in-depth breakdown of the process.
As for the data we present, we have simplified things a bit from years past. Instead of presenting a series of six graphs for every Frame Rating output, we are now focusing on two major areas—frame rate percentiles and frame times.
For each game tested, you'll find a bar graph with the average, 95th percentile, and 99th percentile frame rates. This helps give an idea of the relative performance of each GPU, while taking the ever-important frame consistency into account to determine how smoothly a game was running. Essentially, the closer the average, 95th, and 99th percentile numbers are to each other, the smoother the gaming experience.
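As a rough illustration of how those figures fall out of captured frame times (the real Frame Rating pipeline extracts them from the video overlay, so treat the helper below, including its name and NumPy's percentile defaults, as our own sketch rather than the exact tooling):

```python
import numpy as np

def framerate_percentiles(frame_times_ms):
    """Summarize a capture as average, 95th, and 99th percentile
    frame rates (FPS), from per-frame display times in milliseconds."""
    ft = np.asarray(frame_times_ms, dtype=float)
    avg_fps = 1000.0 / ft.mean()
    # The Nth percentile frame *rate* comes from the Nth percentile
    # slowest frame *time*: 95% (or 99%) of frames render at least this fast.
    fps_95 = 1000.0 / np.percentile(ft, 95)
    fps_99 = 1000.0 / np.percentile(ft, 99)
    return avg_fps, fps_95, fps_99

# A mostly smooth ~60 FPS run with a few slow frames drags the 99th
# percentile figure way down while barely moving the average:
times = [16.7] * 97 + [33.3, 40.0, 50.0]
print(framerate_percentiles(times))  # ~57.4, ~59.9, ~24.9 FPS
```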
Similarly, each game tested will feature a frame time chart. The numbers here represent the amount of time each frame appears on screen for the user; a “thinner” line across the time span represents consistent frame times, which should produce the smoothest animation for the gamer. A “wider” line, or one with a lot of peaks and valleys, indicates much more variance, often caused by runt frames being displayed.
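Since “runts” come up constantly in Frame Rating discussions, here is a minimal sketch of the idea: in the captured video, each frame occupies some number of scanlines, and a frame that occupies only a sliver of the screen is flagged as a runt. The 21-scanline cutoff below is illustrative only, not the exact Frame Rating threshold:

```python
# Hypothetical cutoff; the real pipeline's threshold may differ.
RUNT_SCANLINES = 21

def flag_runts(scanlines_per_frame, cutoff=RUNT_SCANLINES):
    """True for frames too small on screen to contribute to perceived motion."""
    return [lines < cutoff for lines in scanlines_per_frame]

# Frames spanning hundreds of scanlines vs. one eight-line sliver:
print(flag_runts([540, 535, 8, 537]))  # [False, False, True, False]
```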
Lastly, we'll be providing a quick summary of relative performance between competing GPUs, calculated from the average frame rate.
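That relative summary is simply the ratio of average frame rates expressed as a percentage, along the lines of this sketch (function name ours):

```python
def relative_perf_pct(contender_avg_fps, baseline_avg_fps):
    """Percentage advantage of one card over another, from average FPS."""
    return (contender_avg_fps / baseline_avg_fps - 1.0) * 100.0

print(relative_perf_pct(132.0, 100.0))  # 32.0 -> "32% faster"
```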
So why did you not show us power consumption figures -AFTER- you overclocked the cards? From what I’m reading and understanding here… your displayed power consumption figures are only at stock, with no overclocking applied? How can we know how much power it uses when overclocked if you won’t show us? And we have to rely on you, pcper.com, to show us, because literally -NO ONE ELSE ON THE ENTIRE INTERNET- is showing any power consumption figures at all for the RTX 2080 Ti. So please, do show us how much it uses after being overclocked.
I see that Gold Award but it sounds like a hard pass to me. Until prices drop, I don’t see why anyone would buy these cards.
This is still confusing, so if I’m coming from a GTX 770 as a productivity (not gaming) user (architecture student), would this or the 1080 Ti make sense? Like, does anyone know if the ray tracing thing is going to carry over into rendering software, or does that seem like a distant outcome that’s software-developer dependent? I don’t really have the funds to drop on a buzzword tech that isn’t actually future-proofing anything.
You can check Puget Systems for some nice charts comparing rendering times across the cards and a variety of software you likely use. If the 1080 Ti holds up as similar to the 2080 (ray tracing notwithstanding), you might expect “similar” performance… It seems some games do reasonably better on the 2080 so far, while several are improved only 5-7%. We have a lot left to learn through testing, including why some are not necessarily improved.
So, your $ might be better spent on the 1080Ti… at least with initial tests just starting to roll out.
2080Ti (still waiting) ?/*
I think I should have kept my NVIDIA Titan Xp; it got 17,600 on PassMark, and much more without a RAID 0 volume. But CPU and disk always come first for speed, no doubt.
The scores posted on that website look like what you get from 8 PCIe lanes, not 16 PCIe lanes. It is confusing; they always make the new devices look better, but are they?
Just 17,600 vs. 14,800 right now (14,800 on PassMark), and even worse without the RAID 0 volume (18,000 vs. their 14,800).
What are you supposed to believe for the 2080 and 2080 Ti, exactly?
I understand that there is software that improves video viewing through an NVIDIA video card:
https://developer.nvidia.com/rtx/ngx
1. It’s not exactly explained on the site how it works. Is it software for real-time movie viewing, or for film production?
2. Suppose I install the SDK, how do I run the software? Is there a driver that will give me high-quality video? How good is that quality?
3. What video card do I need to get the best video quality?
4. Do you need a special movie player? Does it work with YouTube?
5. “DLSS: What Does It Mean for Game Developers?” Is this software for creating games? It doesn’t seem suitable for those who want to improve videos (in real time).
https://news.developer.nvidia.com/dlss-what-does-it-mean-for-game-developers/
I can’t make sense of this software.
Would someone please test the 2080 Ti via Cinebench? I’ve seen so many tests, but not one test of Cinebench.
What would be really nice is to compare the 2080 Ti and the 1080 Ti’s performance via Cinebench…
Please?
Why are the cards not sorted by performance?
Vega is always put on the bottom, regardless of whether it beats the 1080…
Is that supposed to be some psychological trick?