Early testing for higher end GPUs
We tested a handful of AMD and NVIDIA graphics cards in the brand new Rise of the Tomb Raider, released this week!
UPDATE 2/5/16: Nixxes released a new version of Rise of the Tomb Raider today with some significant changes. I have added a page at the end of this story that looks at results with the new version of the game and a new AMD driver, and I've also included some SLI and CrossFire results.
I will fully admit to being jaded by the industry on many occasions. I love my PC games and I love hardware, but it takes a lot for me to get genuinely excited about anything. After hearing game reviewers talk up the newest installment of the Tomb Raider franchise, Rise of the Tomb Raider, since its release on the Xbox One last year, I've been waiting for its PC release to give it a shot with real hardware. As you'll see in the screenshots and video in this story, the game doesn't appear to disappoint.
Rise of the Tomb Raider takes the exploration and "tomb raiding" aspects that made the first games in the series successful and pairs them with the visual quality and character design brought in with the reboot of the series a couple of years back. The result is a PC game that looks stunning at any resolution, and even more so in 4K, but one that pushes your hardware to its limits. For single-GPU performance, even the GTX 980 Ti and Fury X struggle to keep their heads above water.
In this short article we'll look at the performance of Rise of the Tomb Raider with a handful of GPUs, leaning toward the high end of the product stack, and offer a view on whether each hardware vendor is living up to expectations.
Image Quality Settings Discussion
First, let's talk a bit about visuals, image quality settings and the dreaded topic of NVIDIA GameWorks. Unlike the 2013 Tomb Raider title, Rise of the Tomb Raider is part of the NVIDIA "The Way It's Meant To Be Played" program and implements GameWorks to some capacity.
As far as I can tell from published blog posts by NVIDIA, the only feature that RoTR implements from the GameWorks library is HBAO+. Here is how NVIDIA describes the feature:
NVIDIA HBAO+ adds realistic Ambient Occlusion shadowing around objects and surfaces, with higher visual fidelity compared to previous real-time AO techniques. HBAO+ adds to the shadows, which adds definition to items in a scene, dramatically enhancing the image quality. HBAO+ is a super-efficient method of modeling occlusion shadows, and the performance hit is negligible when compared to other Ambient Occlusion implementations.
The in-game settings allow for options of Off, On and HBAO+ on all hardware. To be quite frank, any kind of ambient occlusion is hard to detect while a game is in motion, though the differences in still images are more noticeable. RoTR is perhaps the BEST implementation of AO that I have seen in a shipping game, and thanks to the large, open, variably lit environments it takes place in, it seems to be a poster child for the lighting technology.
That being said, in our testing for this story I set Ambient Occlusion to "On" rather than HBAO+. Why? Mainly to help dispel the idea that the performance of AMD GPUs is being hindered by the NVIDIA GameWorks software platform. I'm sure this won't silence all of the conspiracy theorists, but hopefully it will help.
Other than that, we went with the Very High quality preset, which turns out to be very strenuous on graphics hardware. If you don't have a GTX 980 or R9 390 GPU (or better), chances are good you'll have to step down some from that even at 2560×1440 or 1920×1080 to get playable and consistent frame times. Our graphs on the following pages will demonstrate that point.
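"Playable and consistent frame times" can be put in concrete terms with a simple summary over a captured frame-time log. The sketch below is illustrative only — the sample data and spike threshold are hypothetical, not PC Perspective's actual capture pipeline or results:

```python
# Sketch: summarize a frame-time capture (milliseconds per frame).
# Sample data and the "2x median" spike threshold are illustrative assumptions.

def summarize_frametimes(frametimes_ms):
    """Return average FPS, 99th-percentile frame time, and spike count."""
    n = len(frametimes_ms)
    avg_fps = 1000.0 * n / sum(frametimes_ms)
    ordered = sorted(frametimes_ms)
    # 99th percentile: the frame time that 99% of frames are at or below.
    p99 = ordered[min(n - 1, int(0.99 * n))]
    # Count "spikes": frames taking more than twice the median frame time.
    median = ordered[n // 2]
    spikes = sum(1 for t in frametimes_ms if t > 2 * median)
    return avg_fps, p99, spikes

# Mostly 60 FPS (16.7 ms) with a few hitches mixed in.
sample = [16.7] * 95 + [33.4] * 4 + [70.0]
avg_fps, p99, spikes = summarize_frametimes(sample)
print(f"avg {avg_fps:.1f} FPS, 99th pct {p99:.1f} ms, {spikes} spike(s)")
```

A run like this can average near 60 FPS while the 99th-percentile frame time and spike count reveal the stutter that a plain FPS number hides — which is exactly why the graphs on the following pages focus on frame times rather than averages.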
Testing Setup
For this short sample of performance we are comparing six different graphics cards at matching price points from AMD and NVIDIA.
- $650
- NVIDIA GeForce GTX 980 Ti 6GB
- AMD Radeon R9 Fury X 4GB
- $500
- NVIDIA GeForce GTX 980 4GB
- AMD Radeon R9 Nano 4GB
- $350
- NVIDIA GeForce GTX 970 4GB
- AMD Radeon R9 390 8GB
I tested in an early part of the Syria campaign at both 2560×1440 and 3840×2160 resolutions, both of which were hard on even the most expensive cards in the comparison. Will the 6GB vs 4GB frame buffer gap help the GTX 980 Ti in any particular areas? How will the R9 390 with 8GB of memory compare to the GTX 970 with 4GB configuration that has long been under attack?
This also marks the first use of our updated GPU testbed hardware, seen in the photo above.
| PC Perspective GPU Testbed | |
|---|---|
| Processor | Intel Core i7-5960X Haswell-E |
| Motherboard | ASUS Rampage V Extreme X99 |
| Memory | G.Skill Ripjaws 16GB DDR4-3200 |
| Storage | OCZ Agility 4 256GB (OS), Adata SP610 500GB (games) |
| Power Supply | Corsair AX1500i 1500 watt |
| OS | Windows 10 x64 |
There is this rumor that you can buy it for $20 in Russia and play it elsewhere via family sharing. Really?
Nixxes have stated on the Steam forums:
“Also note that textures at Very High requires over 4GB of VRAM, and using this on cards with 4GB or less can cause extreme stuttering during gameplay or cinematics.”
http://steamcommunity.com/app/391220/discussions/0/451852225134000777/
This may explain the frame time spikes that were seen.
Ah… maybe I will try it on High instead of Very High. If that fixes it I will be happy, but a little sad because of the 4GB limit issue. Oh well… bring on the 8GB HBM2 cards in the summer, or thereabouts. 🙂
Running the game great in 4K with the SLI hack for my twin 980 Tis. Finally a game that actually uses them fully! Well, Far Cry 4 did a good job too, but the lack of SLI support recently was getting me salty.
Is PCPer not going to do 1080p benchmarks anymore? Few people have 1440p and even fewer have 4K (1.28% and 0.7% according to the last Steam Survey).
While beautiful, this is the most VRAM-hungry title I have ever used! I cranked everything up, setting by setting, just to see what would happen. No complaints on performance. I never saw a dip below 60 fps last more than a split second, and it only went down to 57 fps (I only saw this twice during an hour of play).
But what was surprising was 8.2 GB of VRAM usage at 1080p – that’s nutty! I can’t say I’m entirely surprised given how amazing the game looks, but I’m surprised that my Titan X is essentially necessary to max out this game.
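For readers who want to check their own VRAM usage while playing, one approach on NVIDIA hardware is to poll `nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits` once a second and log the results. A minimal sketch of summarizing such a log — with hypothetical sample values, not measurements from this article:

```python
# Sketch: find peak VRAM usage from samples captured via
#   nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits
# The sample values below are illustrative, not actual measurements.

def peak_vram_mib(csv_lines):
    """Return the peak VRAM usage in MiB from nvidia-smi CSV samples."""
    return max(int(line.strip()) for line in csv_lines if line.strip())

samples = ["6100", "7450", "8390", "8120"]  # MiB, polled once per second
peak = peak_vram_mib(samples)
print(f"peak VRAM usage: {peak / 1024:.1f} GiB")
```

Note that reported allocation is not the same as the working set the game actually needs each frame, so a high peak alone doesn't prove a smaller card would stutter.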
Funny. I think that it would be better if the game used the whole 12GB of VRAM. What is the benefit of unused VRAM to you?
Hi Ryan. AMD Crimson 16.1.1 is out. Please rerun the benchmark. Thanks
Re-Run it NOW !!!
.
.
.
.
.
.
.
.
.
.
.
please..
Test Geothermal Valley / Soviet Installation or gtfo.
So for the updated article, you decided to showcase NVIDIA's best-performing card and AMD's weirdest-performing card, and nothing else? Are you serious? You should've thrown in the 970 and 390 as well.
Yea, WTF Ryan? Your lil Miss will cry shame. She will NOT be proud of daddy on this one.
If the 980 Ti (non-SLI) was tested, then surely your single Fury X should have been – if only to see the difference, if any. It's understandable that they can't all be re-tested (workload, time constraints, etc.), but we're not talking about Peggle Nights here; this is a huge title that challenges the top-tier cards. So those should be the focus.
The difference between the Nano and Fury X is so small that it should not be a factor when choosing between AMD and NVIDIA for a dual-card setup. You can easily add those 5-10% and estimate Fury X performance if you want. And there are not too many Nano benchmarks out there – I always love to see them 🙂
The nano could be throttling in some cases due to the strict thermal limitations. I would have preferred that they use the Fury or Fury X instead, but if they don’t have dual Fury/X cards available, then that just wasn’t an option.
Are you running a reference 980 Ti? Because my GIGABYTE G1 Gaming card is getting better results; just curious.
Since you favor AMD over NVIDIA I figured you would run a reference card with a pink chart color, so it doesn't make the AMD Fury X look so bad… hahaha, just playing, but seriously my card kicks ass!!!
Yes, the pink line is hard to see on some phones. 😛
I keep forgetting that any review that is not an immediately glowingly perfectly admiringly wonderfully perfect review of an Nvidia product IMMEDIATELY MEANS THE REVIEWER IS AN AMD FANBOY.
Jesus. Some of you Nvidia people are pathetic.
I think holyneo was joking…..:)
I was, glad you get it.
😉
Sorry for those that didn’t, well not really. Take a deep breath, slowly walk away from the computer, ponder my reply till a smile forms onto your face.
Can you please run the frame time test with an R9 280 or 280X? That would help figure out whether it is GCN 1.2 or 4GB of VRAM that is causing the spikes with the Fury X and Nano. I know the 980 doesn't have them, but that doesn't mean much because of so many different variables, such as drivers.
Sorry I meant 290 or 290x
Can you confirm that R9 Nano was not throttling during the testing?
I own Nano CrossFire and I underclock to 900MHz to keep the top card from throttling. Left at stock, the spikes are a bit worse/more frequent. Overclocking is out of the question. There are still spikes at a steady 900MHz. I would really like to see benchmarks of R9 290X CrossFire vs R9 390X CrossFire to test whether the spikes are based on the RAM. Obviously drivers could improve Nano/Fury CrossFire either way, because 980 SLI does not have the same spikes.
The comments are too funny!
I feel like this is a two party system, NVidia or AMD! So I’m going to vote Intel Integrated Graphics are better than both combined. Yea for independents!
Would be great to see this revisited now that patch 1.0.668.1 (Patch #7) is out with improved async compute support. The addition of Polaris benches would be valuable too.