3840×2160 Game Benchmarks
We begin the 4K (technically UHD, I know) game tests with Ashes of the Singularity: Escalation, run again using DX12, and this time with the "standard" preset. As you will see, it would serve us better to re-test all of these games with higher texture quality to really push the limits of available VRAM, and that update to this story is already planned. For now, at least, consider these an indication of what the Radeon VII can do at "normal" quality settings at 3840×2160.
While it did not end up in first place this time, the difference in performance between the Radeon VII and RTX 2080 was not noticeable, with the latter providing an average of about 2 FPS better performance at these settings. Frame time variance was also about the same between these two cards at the top of the chart:
Next we have Far Cry 5, and look what happens when we move up to 3840×2160, even at "normal" preset settings:
About a 2 FPS advantage here for AMD, essentially flip-flopping the first result with Ashes. It's worth noting that we aren't even close to maxing out the VRAM of most of the cards on test at this "normal" preset even at "4K" resolution, with the optional HD textures not enabled (again, something to re-visit as we find ways to use more of the Radeon VII's massive 16GB of memory).
Frame times were very consistent, with a variance of less than 3 ms between average and 99th-percentile for the Radeon VII, just ahead of the RTX 2080:
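For readers wondering how we arrive at these numbers: the "variance" figure quoted throughout is simply the gap between a card's average and 99th-percentile frame times. A minimal sketch of that calculation follows — the function name and sample data are illustrative, not our actual capture pipeline:

```python
import statistics

def frametime_stats(frame_times_ms):
    """Return (average, 99th percentile, gap) for a list of per-frame
    render times in milliseconds. "Variance" in the text means the gap
    between average and 99th percentile, not statistical variance."""
    avg = statistics.mean(frame_times_ms)
    # quantiles(n=100) yields the 1st..99th percentile cut points;
    # index 98 is the 99th percentile. The "inclusive" method avoids
    # extrapolating past the observed maximum on small samples.
    p99 = statistics.quantiles(frame_times_ms, n=100, method="inclusive")[98]
    return avg, p99, p99 - avg

# Illustrative numbers only -- not captured benchmark data
times = [16.2, 16.5, 17.0, 16.8, 19.5, 16.4, 16.9, 16.6, 16.7, 18.9]
avg, p99, gap = frametime_stats(times)
```

A small gap means the occasional slow frame is barely slower than the typical frame, which is what "very consistent" refers to above.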
Moving on to the sole racing game in this group, F1 2018 is run this time using the "medium" settings at 3840×2160 resolution.
Radeon VII remains in front of the RTX 2070, but lags behind the RTX 2080 by a significant amount. We need only look at frame times to understand why, as both the Radeon VII and RX Vega 64 hit the same 20 ms 99th-percentile frame times, hurting overall performance and perceived smoothness:
Next we have Middle Earth: Shadow of War, another DX11 title, this time run with the "medium" preset and 3840×2160 resolution.
The Radeon VII is less than 4 FPS away from the RTX 2080, but this game is another victory for the NVIDIA card. The Radeon VII does provide a nearly 9 FPS advantage over the RTX 2070 in this test, and its frame times were tied with the RTX 2080 for most consistent (2.8 ms difference between avg. and 99th percentile) among the cards tested:
Now we come back to Shadow of the Tomb Raider, run this time with DX12 at the lower "medium" preset as we move up to 3840×2160.
This time the Radeon VII is ahead of the GTX 1080, but still behind the RTX 2070. Higher frame time spikes and a noticeable lack of smoothness are part of the issue here:
Moving away from potentially controversial game choices that seem to favor NVIDIA GPUs, we will look again at the "canned" benchmarks from standalone Final Fantasy XV and World of Tanks enCore applications. FFXV is first, run as before with the "standard" preset to minimize GameWorks involvement:
Once again the Radeon VII has a solid showing, finishing just a little behind the RTX 2080. Both cards offered smooth overall results, in keeping with most of the tested cards:
Finally we come again to World of Tanks enCore, run again using the "ultra" preset as otherwise it just isn't much of a challenge for many of these GPUs.
Once again the Radeon VII sits between the RTX cards at the top of the chart, which was the story of these benchmarks overall. Frame times were not as consistent as we'd like for most of the cards on test, though a little better than we saw at 2560×1440:
So that is the gaming performance story so far, though as I mentioned on the previous page it behooves us to test at higher settings to see how much of an advantage this much available VRAM can provide. We must find a balance between higher VRAM utilization and acceptable frame rates to make such testing viable.
Before publishing I had been experimenting with some very aggressive settings in some of the above games, but saw frame rates far too low to consider the results acceptable. More time is needed to find this balance, and in the meantime AMD's Scott Wasson (Sr. Manager, Product Management, Radeon Technologies Group) has written a blog post covering the VRAM topic in depth, demonstrating how the 16GB of HBM2 on the Radeon VII can uncap performance in certain instances.
"Of course, tools that measure VRAM allocation don’t always tell the whole story. Applications sometimes fail to let go of bits they’re no longer actively using, so exceeding your video card’s indicated VRAM usage doesn’t always lead to obvious complications. When a system does run up against a VRAM limit in practice, though, the result can be severe slowdowns or even instability. Here’s one example of what happens when a game overruns the video card’s VRAM capacity.
Below is a plot of the frame times over time while walking through the Montana forest in Far Cry 5."
In my own testing Far Cry 5 was less memory intensive than I would have expected, even at 4K and with the HD textures enabled. What the chart above is demonstrating is a use-case involving dynamic resolution, and that is an area where you can easily extend past 8GB of VRAM. From the footnotes, the above slide used "Far Cry 5 configured at 3840×2160 via VSR/DSR, 2560×1440 target display, Ultra quality presets, HDR10, dynamic resolution enabled".
VSR (Virtual Super Resolution) on the AMD side and DSR (Dynamic Super Resolution) on the NVIDIA side are interesting technologies that present a much higher resolution to the operating system and supported games than your monitor natively supports, with the rendered image then downsampled to fit your display. This provides improved visuals at a performance penalty, as you might expect, so in instances where large quantities of VRAM make a difference the Radeon VII obviously has an advantage among gaming GPUs. Whether the GPU core itself can handle running at very high resolutions is another matter, as frame rates can be quite low using VSR/DSR, depending on settings.
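To illustrate why supersampling chews through VRAM so quickly, here is a back-of-envelope sketch. The bytes-per-pixel and render-target counts below are illustrative assumptions for a generic deferred renderer, not measurements from any particular engine:

```python
def rendertarget_mib(width, height, bytes_per_pixel=4, targets=1):
    """Raw memory for full-resolution render targets, in MiB.
    Real engines mix many targets at different precisions; these
    parameters are placeholders for illustration."""
    return width * height * bytes_per_pixel * targets / (1024 ** 2)

# A deferred renderer might keep ~6 full-resolution buffers around:
native = rendertarget_mib(2560, 1440, targets=6)  # 1440p display
vsr = rendertarget_mib(3840, 2160, targets=6)     # rendered via VSR/DSR
# 3840x2160 has 2.25x the pixels of 2560x1440, so every
# full-resolution buffer grows by that same 2.25x factor.
```

The multiplier applies to every full-resolution buffer the renderer allocates, which is why a dynamic-resolution scenario rendering internally at 4K can blow past 8GB even when the 1440p version fits comfortably.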
So interesting and nice to see AMD at least “keeping” up with the RTX 2080 (win some/lose some)..
Price of course will be a concern (and potential availability)
Looking at it from compute metrics, especially Blender, I have a feeling that for the price of a single Radeon 7, I can get two Vega 64s, downclock them, and have far more rendering performance..
Still that 16GB buffer is a nice touch.
Gaming wise, that is an “easy” pick. If you love AMD then this is a nice boost and almost sufficient to spend your money on an upgrade. If you like Nvidia, then clearly the RTX 2080 is the best choice.
Nice that we have now a card for both markets.
Now AMD just needs to get Navi out and bring some RTX 2080 Ti competition
Sebastian, according to AnandTech AMD did a last minute firmware / driver update to enable DP at 1/4 SP. So last minute that it happened after the review cards were sent out.
https://www.anandtech.com/show/13923/the-amd-radeon-vii-review/3
I saw that a few minutes ago at AnandTech. 1/4SP rate now rather than 1/16SP. Updated. Thank you!
About the noise. Have you tried changing the fan curve so the temps are more like the 2080?
When I buy a GPU I look at price, performance, and noise. Sadly AMD hasn’t been able to compete in all 3 at the same time for a while now. And this card fails on the noise part with the standard fan curve :C
looking at Anandtech’s review of the Radeon VII and it still has more of a prosumer feel because of the 16GB of VRAM and the 1/4 DP FP to 1 SP FP ratio. And while the Radeon VII’s DP FP ratio is not as good as the MI50’s 1/2 DP FP to 1 SP FP ratio it’s still better than Vega 10’s 1/16 DP FP to 1 SP FP ratio.
I think a lot of the power used can be attributed to that extra DP FP capability on Radeon VII. There are also some additional AI-focused instructions added on Vega 20 above what Vega 10 offers, and more needs to be asked of AMD about that and about prosumer, or even graphics-related, uses of Vega 20’s additional AI-related ISA extensions on the Vega-2 GPU micro-arch. All of the major graphics software packages have had some AI-related image filters and effects added, and that was done before even Nvidia’s Volta was released, with the AI workloads being accelerated on the GPU’s shader cores instead of dedicated tensor cores.
The ROPs’ available bandwidth has also been improved on Radeon VII, and maybe there are other tweaks that will become known as the whitepapers become available.
Also the drivers for Radeon VII need more time to fully mature, so maybe a few more rounds of benchmarking will have to be done once the next round of driver updates is available. Radeon VII’s Fine Wine(TM) may be somewhat different depending on the TSMC 7nm process and production becoming more mature over time. Vega 20 die production has been ongoing since 2018, and the better die bins are being used for Radeon Instinct and Radeon Pro WX production, so Radeon VII is not getting the top-binned Vega 20 die output.
My biggest difference with the Anandtech article is their speculation on TSMC’s yields on Vega 20. Anandtech knows very well that because the die size at 7nm is smaller, that equates to more dies per wafer, and the yields will actually go up as a result.
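The dies-per-wafer point is easy to sanity-check with the standard textbook approximation. The wafer size and defect density below are generic placeholder values, not TSMC figures:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common gross-die approximation for a round wafer and a
    square-ish die; ignores scribe lines and edge exclusion."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2=0.2):
    """Simple Poisson defect model: a smaller die is less likely to
    land on a defect, so per-die yield rises as area shrinks."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

vega10 = gross_dies_per_wafer(495)  # Vega 10 at ~495 mm^2
vega20 = gross_dies_per_wafer(331)  # Vega 20 at ~331 mm^2
```

Both effects point the same way: the smaller die gives more candidates per wafer, and each candidate is more likely to be defect-free, regardless of what the 7nm process's actual defect density is.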
It looks like a good time for AdoredTV to do a review of the various review sites’ Radeon VII coverage for some peer-reviewed fact checking.
I’m also suspicious of all this speculation on the costs of the 4GB HBM2 die stacks on Radeon VII, as the professional compute/AI markets make more use of the 8GB and higher capacity HBM2 stacks rather than the 4GB HBM2 stacks. And that amended JEDEC HBM2 standard allows for even higher per-stack capacities than 8GB.
So really those 4GB HBM2 stacks may not be as costly anymore to produce, but who really knows for sure. HBM2 is used in many more products than just GPUs, so maybe the HBM2 makers have had enough HBM2 in production for a long enough time period to fully amortize their initial HBM2 R&D and equipment costs. Both SK Hynix and Samsung have had HBM2 production ongoing, so there should be a little more competitive price pressure on the 4GB HBM2 stacks as the R&D and tooling/equipment costs become fully amortized and the HBM2 makers have more room to lower their respective HBM2 prices in order to attempt to take market share.
What about andy Dual Radeon VII benchmarking for games that make use of DX12’s and Vulkan’s explicit multi-GPU adapter ability? It looks like xGMI (Infinity Fabric) links are not available for the consumer Radeon VII, and AMD is starting to segment its consumer GPU variants from its Pro GPU variants a little more than it has done in the past. I hope that this means that Navi will be more gaming oriented with a higher ratio of ROPs to shader cores, and that mainstream Navi will have much higher pixel fill rates than the Polaris mainstream offerings.
Edit: What about andy
To: What about any
Bad edit. I’m very interested in Andy. Tell me about Andy.
Maybe his girlfriend’s been testing the Linux 5.1 Kernel against her Qualcomm device(1) and not paying much attention to Andy!
Andy’s Girlfriend: Qualcommmmmmmmmmmmmmm…!
(1)
“Qualcomm Vibrator Driver Queued For Linux 5.1”
https://www.phoronix.com/scan.php?page=news_item&px=Qualcomm-Vib-Driver-Linux-5.1
GamersNexus is not happy with the redone API calls on Radeon VII that are making things impossible to validate!
“Because AMD completely overhauled its API calls for this card, no current software utilities work for it. Afterburner is broken, GPU-z needs an update (and its creator is on vacation), and Wattool is also largely non-functioning. This leaves us with AMD’s WattMan, which is also presently in a largely unusable state.”(1)
(1)
“AMD Radeon VII Review: Rushed to Launch (& Pad vs. Paste Test)”
https://www.gamersnexus.net/hwreviews/3437-amd-radeon-vii-review-not-ready-for-launch
I typically dog on AMD, but at least I think they have a card that will be decent come launch titles in winter 2020 due to that memory buffer. The games will probably be unoptimized for PC as usual, but there is more hope of holding out until 2021 when the real next-gen PC cards come out.
indie devs can probably put this card to good use
now lets see what Nvidia has planned for Valentines Day
No GTX 1080 Ti in the games tested. For shame.
Isn’t the ROP count of the Radeon 7 supposed to be 60 instead of 64?
Compute units are 60 on the Radeon 7 where they were 64 on Vega. ROPs remain the same at 64, last I checked.
It seems to me like AMD isn’t interested in taking the performance crown from Nvidia. The VII is basically the same as a watercooled (overclocked) Vega64 with double the memory bandwidth. I wonder how close the watercooled Vega compares to the VII.
Rather than heaping on more cores/transistors when going to 7nm, they decided to up the clock speed a bit to get it competitive with the RTX 2080 and leave it as a smaller die to help their profit margins (and R&D costs). The transistor count is 5.6% higher than Vega 64’s, while the 2080’s count is 88% higher than the 1080’s. By contrast, Nvidia added transistors when going from 16nm to 12nm to accommodate the additional cores and AI and raytracing bits. Nvidia went from Pascal to Turing, but the VII is still Vega.
Radeon VII die: 331 mm2
RX Vega 64 die: 495 mm2
2080 RTX die: 545 mm2
1080 GTX die: 314 mm2
According to these benchmarks, there isn’t much difference between the VII and the 2080, but you’re also not getting the fancy antialiasing and raytracing stuff. I also read something a while back about some sort of VR optimization in Pascal called simultaneous multi-projection. Did that ever catch on? The extra memory is nice I guess, but I’m not sure how useful it is.
I don’t know why I’m obsessing over this stuff though, since I won’t be upgrading the 1070 I bought on sale before the crypto craze took off. My old rule of thumb is to upgrade when a new card comes out that is 2x the speed at the same price. These are 2x the speed at 2x the price.
They’ve never gone higher than 4096 cores. Seems a GCN limit more than a conscious choice.
Maybe Nvidia is using 9.5T libraries and more fins per cell than AMD, who uses 7.5T libraries and fewer fins per cell. The 9.5T libraries take up more room (more fins per cell), but they afford Nvidia the option of creating more 4-fin transistors that can be driven to higher clocks than 7.5T libraries with fewer fins per cell and more 3- and 2-fin transistors that cannot be driven to higher clock rates.
Vega 20 is initially a data center part, so 7.5T lower-power libraries and greater density are what AMD was targeting. Radeon VII’s die at 331 mm^2 is sure to allow for more dies per wafer, and that’s a known way to increase yields for any chip maker.
It’s not the GCN or Vega micro-arch that’s the problem for AMD and gaming, it’s the tessellation and rasterization deficiencies that are the result of AMD’s die tapeouts, and that’s because AMD does not design gaming-only focused designs. Nvidia has a lead of 5+ different tapeouts over AMD’s only 1 or 2 different desktop die tapeouts per generation.
The GP102 and TU102 die tapeouts still offer 96 available ROPs that Nvidia prunes back to 88 ROPs to bin out its GTX 1080 Ti and RTX 2080 Ti series of consumer flagship gaming variants. And both the GP102 and TU102 base die tapeouts have more Quadro variants before Nvidia makes the bottom-binned die for its flagship gaming variant based off of those base die tapeouts.
AMD just needs a tapeout with 96+ available ROPs and better tessellation resources. Each GPU tapeout costs in the millions for the mask sets and the wafer start capacity at the chip fab. Nvidia spends billion+ dollar figures on all its various different base die tapeouts, where AMD can only afford at most 1 or 2 per generation of desktop GPU.
AMD’s APU/laptop market share numbers are around 12% now, so the Vega-based graphics installed base on devices is going to be rather high, and going higher with each new Raven Ridge APU/Vega integrated graphics laptop sold. Desktop Raven Ridge is popular also, so that’s even more integrated Vega graphics market share.
Gamers should be attacking AMD’s base die tapeouts for not having more available ROPs to increase the pixel fill rates, and that’s where AMD’s lack of cash for RTG hurts the most. AMD really needs a gaming-only oriented desktop GPU base die tapeout, maybe even 2 gaming-focused desktop die tapeouts similar to Nvidia’s GP104/TU104 and GP106/TU106 base die tapeouts for the mainstream gaming market. AMD should also be going after the pro graphics market with some base die tapeout similar to Nvidia’s pro-oriented GP102/TU102 tapeouts, where the lowest-binned variant becomes the flagship gaming GPU variant.
But that all takes money that AMD currently does not have, and RTG has to pay its own way so that’s even less available for consumer gaming focused GPU Base Die Tapeouts.
And it’s only refined Vega. Don’t get me wrong, AMD learned with the VII from the unmitigated disaster that the VI was.
I like the 16GB of HBM2, I really do, but it still is only a Vega. It still cannot really compete on the Windows platform with the 1080 Ti, which is 2 years old.
No IRAY support means I won’t even try it. But I’m sure on Linux or Mac, where support for AMD is light years ahead of Windows, the VII will be a blast, just as long as, say, your rendering engine doesn’t require Nvidia IRAY… 😉
Vega 10 is most definitely not a failure in the HPC/AI market, and gamers are where the non-performant GPU dies go that do not make the grade to become pro parts. The miners made Vega 10 a winner from a business perspective even if the gaming market may not have.
Radeon VII’s gaming drivers at release are not very optimized, and that refactored API has all of the GPU vitals/metrics reporting third-party software needing to have their code bases refactored as well to work with Radeon VII.
So I trust the initial benchmarks even less, and I never pass judgment on any newly released GPU until at least 3 months afterwards, when the kinks are worked out.
Vega 20 is most definitely more prosumer than Vega 10, what with Vega 20’s DP FP rate at that 1/4 DP FP to 1 SP FP ratio. The 16GB of HBM2 is better for animation graphics workloads, where animation scene assets can easily be larger than 16GB, and Vega’s HBCC/HBC IP can turn the HBM2 into a last-level VRAM cache for the GPU, with any non-recently-needed assets (textures/mesh data) paged out to/from system DRAM. Animation rendering on Radeon VII is going to go smoothly for sure.
The Vega-1 micro-arch has been supplanted by the Vega-2 micro-arch on Vega 20, with 50-something more AI-related instructions added in the Vega-2 ISA. My biggest disappointment is the lack of xGMI links on the consumer Radeon VII compared to the Pro Vega 20 based parts.
“Radeon VII’s gaming drivers at release are not very optimized… … …So I trust the initial benchmarks even less, and I never pass judgment on any newly released GPU until at least 3 months afterwards, when the kinks are worked out.”
If AMD cannot even provide decent drivers for a GCN based card in 2019, the next 3 months (or years) aren’t going to change much.
Secondly, considering AMD’s track record in support and the limited availability and production of RVIIs, you can bet your bottom dollar this product is going to receive limited support and optimization.
They completely overhauled the API calls for Radeon VII so GamersNexus(1) had no way of properly reviewing the SKU.
“Because AMD completely overhauled its API calls for this card, no current software utilities work for it. Afterburner is broken, GPU-z needs an update (and its creator is on vacation), and Wattool is also largely non-functioning. This leaves us with AMD’s WattMan, which is also presently in a largely unusable state.” (1)
So yes, this product needs more time in the software/firmware oven before it’s fully done, and any new GPU product will have issues. AMD’s been better with its products, but this is definitely a regression in some respects of AMD’s driver/API launch support.
It’s AMD’s flagship consumer/prosumer 7nm offering until Navi is ready (Oct 2019 rumored Navi release)! So I think that it will receive more attention than you think.
Both Vega 10 and Vega 20 are really not gaming-only focused GPU base die tapeouts, and unless you have half a billion+ dollars to loan AMD to create 5+ different desktop-oriented base die tapeouts per generation like Nvidia can afford, then AMD/RTG will continue to lack the resources of Nvidia.
Nvidia has specific Gaming oriented Base Die Tapeouts that are stripped of excess compute and have more gaming oriented focused layouts. GP104/TU104 as well as GP106/TU106 are mostly for mainstream gaming and GP102/TU102(More Quadro Variants and only one gaming variant) are more like Vega 10/Vega 20 except that Nvidia has much higher numbers of available ROPs on GP102/TU102, 96 ROPs available on those Base Die Tapeouts. So Nvidia still leads even Vega 20 with GP102’s/TU102’s 88 out of 96 available ROPs enabled and a much higher Pixel fill rate than Vega 20.
Vega 20, and the Radeon VII derivative of Vega 20, is most definitely not a gaming-only focused SKU with that 1/4 to 1 DP FP to SP FP ratio. So that’s 3,360 GFLOPS of DP FP performance on Radeon VII compared to the RTX 2080 Ti’s 420.2 GFLOPS (1:32 DP FP : SP FP) performance. And Vega 20 is not just a die shrink of Vega 10, as there are new deep learning/AI instructions added in the Vega-2 (Vega 20) GPU micro-arch that the Vega-1 (Vega 10) GPU micro-arch does not support.
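Those peak-FLOPS figures fall out of the usual shaders × clock × 2 (FMA) × precision-ratio arithmetic. A quick sketch, using approximate published boost clocks:

```python
def peak_gflops(shaders, boost_mhz, dp_ratio=1.0):
    """Peak throughput: shaders x clock x 2 ops/clock (FMA),
    scaled by the double-precision rate."""
    return shaders * boost_mhz * 2 * dp_ratio / 1000

# Radeon VII: 3840 shaders, ~1750 MHz boost, 1:4 DP:SP rate
vii_dp = peak_gflops(3840, 1750, dp_ratio=1 / 4)
# RTX 2080 Ti: 4352 CUDA cores, ~1545 MHz boost, 1:32 DP:SP rate
ti_dp = peak_gflops(4352, 1545, dp_ratio=1 / 32)
```

Plugging in the numbers reproduces the 3,360 vs. ~420 GFLOPS gap quoted above, which is almost entirely down to the 1:4 vs. 1:32 precision ratio rather than raw shader throughput.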
And with 16GB of HBM2, content creators are going to like Radeon VII even more. And that’s 16GB of HBC if the HBM2 is used as a VRAM cache to a larger pool of virtual VRAM paged to/from system DRAM by Vega 20’s HBCC.
So for large, complex animation scenes with way more than 16GB of texture/mesh assets, Radeon VII’s HBCC is able to use that 16GB of HBM2 as a last-level HBC, with even more virtual VRAM paged out to and from system DRAM in the background, while the GPU only has to work from the 16GB of HBM2; the HBCC will take care of the background swapping.
So it’s a GCN based card in 2019 with its API so completely refactored that the current software utilities need to be updated to work with that new API. Vega 20’s GPU Micro-Arch has been expanded with new AI related Instructions and more tweaks are still to be revealed for Vega 20’s tapeout. Vega 20 die production has been ongoing since 2018 with the Radeon Instinct MI60/MI50 SKUs released in Q4 2018.
So not much is known about Vega 20’s current microcode version or what changes have been made for Radeon VII’s current microcode version, but there are always tweaks that come with shrinks and new tapeouts.
From Anandtech’s Vega 20 review(2):
“The big improvement here is all of that extra memory bandwidth; there’s now over twice as much bandwidth per ROP, texture unit, and ALU as there was on Vega 10. The bodes particularly well for the ROPs, which have traditionally always been big bandwidth consumers. Not stopping there, AMD has also made some improvements to the Core Fabric, which is what connects the memory to the ROPs (among other things). Unfortunately AMD isn’t willing to divulge just what these improvements are, but they have confirmed that there aren’t any cache changes among them.” (2)
So until the Vega 20 ISA manuals and whitepapers are available it’s still too early to make an informed decision.
And if you are only interested in gaming workloads, it’s been a long-known fact that Nvidia has the funds to afford those specifically gaming-oriented base die tapeouts, whereas AMD lacks the funding to afford that sort of specialization.
(1)
“AMD Radeon VII Review: Rushed to Launch (& Pad vs. Paste Test)
By Steve Burke Published February 07, 2019 at 9:02 am”
https://www.gamersnexus.net/hwreviews/3437-amd-radeon-vii-review-not-ready-for-launch
(2) See page 2 of the article, “Vega 20 Under The Hood”:
“The AMD Radeon VII Review: An Unexpected Shot At The High-End”
https://www.anandtech.com/show/13923/the-amd-radeon-vii-review/
The card is trash. It’s no different than the Vega 64 vs 980 Ti head to head. All credible reviewers and gamers recommended and chose the 980 Ti over it. There is NO reason for gamers to choose it over the 2080. NONE. AMD (Radeon) is not back, and now it looks like you’ll be waiting longer for the RX cards and resupplies of the VII.
Another soft launch for AMD. Not impressed.
Vega 64 was never in competition with the 980Ti – that would be the Fury X. Back then (and again with Vega 64 vs. GTX 1080) there were still two reasons to go with AMD even though they lost on performance and power: FreeSync support and price.
Now that Nvidia has finally enabled Adaptive Sync on their products, I’m inclined to agree that this card has no real selling point for the majority of users. It’s not “trash”, though, and a price cut could make it into a very solid option, especially for a user prepared to gamble on some power/noise gains via undervolting.
It blows my mind how poorly GCN scales up. I have a Ryzen 3 2200G and it impresses me how many recent games it can run at 900p or even 1080p, yet multiply its core count by 7, clock speed by 1.5, TDP by 5, and bandwidth by a googolplex, and … this is what you get…
Somehow I left that 2200G comparison out of the review 😉