An interesting night of testing
We have the first review!
Last night I held our first ever live benchmarking session, using the just-arrived Radeon Vega Frontier Edition air-cooled graphics card. Buying the card directly from a reseller, rather than being sampled by AMD, gave us the opportunity to test a new flagship product without an NDA in place to keep us silent, so I thought it would be fun to let the audience and community go along for the ride of a traditional benchmarking session. Though I didn’t get everything I wanted done in that 4.5-hour window, it was great to see the interest and excitement for the product and the results we were able to generate.
But to the point of the day – our review of the Radeon Vega Frontier Edition graphics card. Based on the latest flagship GPU architecture from AMD, the Radeon Vega FE card has a lot riding on its shoulders, despite not being aimed at gamers. It is the FIRST card to be released with Vega at its heart. It is the FIRST instance of HBM2 being utilized in a consumer graphics card. It is the FIRST in a new attempt from AMD to target the group of users between gamers and professional users (like NVIDIA has addressed with Titan previously). And, it is the FIRST to command as much attention and expectation for the future of a company, a product line, and a fan base.
Other than the architectural details that AMD gave us previously, we honestly haven’t been briefed on the performance expectations or the advancements in Vega that we should know about. The Vega FE products were released to the market with very little background, only well-spun turns of phrase emphasizing the value of the high performance and compatibility for creators. There has been no typical “tech day” for the media to learn fully about Vega, and there were no samples from AMD to media or analysts (that I know of). Undeterred, I purchased one (several, actually, to see which would show up first) and decided to do our testing.
On the following pages, you will see a collection of tests and benchmarks that range from 3DMark to The Witcher 3 to SPECviewperf to LuxMark, attempting to give as wide a viewpoint of the Vega FE product as I can in a rather short time window. The card is sexy (maybe the best looking I have yet seen), but it will disappoint many on the gaming front. For professional users who are okay without certified drivers, the performance is more likely to raise some impressed eyebrows.
Radeon Vega Frontier Edition Specifications
Through leaks and purposeful information dumps over the past couple of months, we already knew a lot about the Radeon Vega Frontier Edition card prior to the official sale date this week. But now with final specifications in hand, we can start to dissect what this card actually is.
| | Vega Frontier Edition | Titan Xp | GTX 1080 Ti | Titan X (Pascal) | GTX 1080 | TITAN X | GTX 980 | R9 Fury X | R9 Fury |
|---|---|---|---|---|---|---|---|---|---|
| GPU | Vega | GP102 | GP102 | GP102 | GP104 | GM200 | GM204 | Fiji XT | Fiji Pro |
| GPU Cores | 4096 | 3840 | 3584 | 3584 | 2560 | 3072 | 2048 | 4096 | 3584 |
| Base Clock | 1382 MHz | 1480 MHz | 1480 MHz | 1417 MHz | 1607 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz |
| Boost Clock | 1600 MHz | 1582 MHz | 1582 MHz | 1480 MHz | 1733 MHz | 1089 MHz | 1216 MHz | – | – |
| Texture Units | ? | 224 | 224 | 224 | 160 | 192 | 128 | 256 | 224 |
| ROP Units | 64 | 96 | 88 | 96 | 64 | 96 | 64 | 64 | 64 |
| Memory | 16GB | 12GB | 11GB | 12GB | 8GB | 12GB | 4GB | 4GB | 4GB |
| Memory Clock | 1890 MHz | 11400 MHz | 11000 MHz | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 1000 MHz | 1000 MHz |
| Memory Interface | 2048-bit HBM2 | 384-bit G5X | 352-bit G5X | 384-bit G5X | 256-bit G5X | 384-bit | 256-bit | 4096-bit HBM | 4096-bit HBM |
| Memory Bandwidth | 483 GB/s | 547.7 GB/s | 484 GB/s | 480 GB/s | 320 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s |
| TDP | 300 watts | 250 watts | 250 watts | 250 watts | 180 watts | 250 watts | 165 watts | 275 watts | 275 watts |
| Peak Compute | 13.1 TFLOPS | 12.0 TFLOPS | 10.6 TFLOPS | 10.1 TFLOPS | 8.2 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS |
| Transistor Count | ? | 12.0B | 12.0B | 12.0B | 7.2B | 8.0B | 5.2B | 8.9B | 8.9B |
| Process Tech | 14nm | 16nm | 16nm | 16nm | 16nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $999 | $1200 | $699 | $1200 | $599 | $999 | $499 | $649 | $549 |
The Vega FE shares enough of a specification listing with the Fury X that it deserves special recognition. Both cards sport 4096 stream processors, 64 ROPs and 256 texture units. The Vega FE runs at much higher clock speeds (35-40% higher) and also upgrades to the next generation of high-bandwidth memory, quadrupling capacity. Still, there will be plenty of comparisons between the two products, looking to measure IPC changes between the CUs (compute units) of Fiji and the NCUs built for Vega.
The Radeon Vega GPU
The clock speeds also see a shift this time around with the adoption of “typical” clock speeds. This is something that NVIDIA has been using for a few generations since the introduction of GPU Boost, and it tells the consumer how high they should expect clocks to go in a nominal workload. Normally I would say a gaming workload, but since this card is supposedly for professional users and the like, I assume this applies across the board. So even though the GPU is rated at a “peak” clock rate of 1600 MHz, the “typical” clock rate is 1382 MHz. (As an early aside, I did NOT see 1600 MHz at any point in my testing with our Vega FE; clocks settled in at ~1440 MHz most of the time.)
The 13.1 TFLOPS of peak theoretical compute is impressive, beating out the best cards from NVIDIA, including the GeForce GTX 1080 Ti and the Titan Xp. How that translates into gaming or rendering power remains to be seen, but in general AMD cards tend to show higher peak rates for equal real-world performance.
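That peak figure is simple arithmetic: stream processors × 2 FLOPs per clock (counting an FMA as two operations) × clock speed. A quick sketch of the math (the helper name is mine, not AMD's), including what the number looks like at the ~1440 MHz we actually observed:

```python
# Peak FP32 throughput: cores x 2 FLOPs/clock (FMA counts as two) x clock.
# peak_tflops is an illustrative helper, not an official formula.
def peak_tflops(cores: int, clock_mhz: float) -> float:
    return cores * 2 * clock_mhz * 1e6 / 1e12

print(round(peak_tflops(4096, 1600), 1))  # Vega FE at peak clock: 13.1
print(round(peak_tflops(3840, 1582), 1))  # Titan Xp at boost: 12.1
print(round(peak_tflops(4096, 1440), 1))  # Vega FE at observed ~1440 MHz: 11.8
```

Note how the gap over the Titan Xp shrinks once the card settles below its peak clock.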
Vega Frontier Edition will use a set of two stacks of HBM2, 8GB each, for a total graphics memory allotment of 16GB. Running at 1.89 GHz effective speeds, this gives us a total memory bandwidth of 483 GB/s, lower than the 512 GB/s of the Fury X and lower than the Titan Xp, which is rated at 547.7 GB/s with its GDDR5X implementation.
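The bandwidth figures fall out of bus width and effective clock: bytes per second = (interface width / 8) × effective transfer rate. A minimal sketch (the helper name is mine):

```python
# Memory bandwidth: (bus width in bits / 8) bytes per transfer x effective rate.
def bandwidth_gbs(bus_bits: int, effective_mhz: float) -> float:
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(bandwidth_gbs(2048, 1890))   # Vega FE, 2048-bit HBM2: ~484 GB/s
print(bandwidth_gbs(4096, 1000))   # Fury X, 4096-bit HBM: 512 GB/s
print(bandwidth_gbs(384, 11400))   # Titan Xp, 384-bit G5X: ~547 GB/s
```

The wide-but-slow HBM approach and the narrow-but-fast GDDR5X approach land in the same neighborhood.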
Power consumption is rated at 300 watts for the air cooled card (that we are testing today) and 375 watts for the water cooled version. That variance is definitely raising some concerns as it would indicate that the air cooled version will be thermally limited in some capacity, allowing the water cooled version to run the GPU at a lower temp, hitting clock speeds closer to the peak 1600 MHz for longer periods.
The price of $999 for the model we are testing today (and $1499 for the water-cooled option) plants the Vega FE firmly in Titan territory. The Titan Xp currently sells from NVIDIA for $1200. Obviously, for our testing we are going to be looking at much lower-priced GeForce cards (GTX 1070 through GTX 1080 Ti), but we are doing so purely as a way to gauge potential RX Vega performance.
The Gorgeous Radeon Vega Frontier Edition
Let’s talk about the card itself for a bit. I know the design has been seen in renderings and at trade shows for a while, but in person, I have to say I am impressed. Reactions will likely be mixed: the color scheme is definitely not neutral, and if you don’t appreciate the blue/yellow scheme, then no amount of quality craftsmanship will make a difference.
The metal shroud and back plate have an excellent brushed metal texture and look to them, and even the edges of the PCB are rounded. The fan color perfectly matches the blue hue of the card.
A yellow R logo cube rests on the back corner, illuminating the hardware in an elegant light. The top of the card features the Radeon branding with the same yellow backlight but I do wish the Vega logo on the face of the card did the same.
Even the display connector plate feels higher quality than on other cards, with a coated metal finish. It features three full-size DisplayPort connectors and a single HDMI port.
Above the dual 8-pin power connectors you’ll find a GPU Tach, a series of LEDs that light up progressively as GPU load goes up.
Though the look and style of a graphics card can only take you so far, and can only add so much to the value of a product that 99 times out of 100 ends up inside a computer case, it is great to see AMD take such pride in this launch. I can only hope that the consumer variant sees as much attention paid to it.
Great review as always. please don’t mind the idiots blabbering “bu bu buut it’s not a gaming card!”
So you “overclocked” the card by increasing the offset and power limit, but didn’t undervolt it, so the clocks didn’t even reach stock. And the 1080 and 1080 Ti you tested, were they both Founders Edition or aftermarket cards? Thermals are definitely the limiting factor here.
Everyone reading this article, please wait for Gamers Nexus’s video/article, as that’s where a valid and proper review of the card will be conducted (and the fact that this is still not a gaming card will be acknowledged).
I heard that the memory is made by Micron? Is Micron making HBM2 now?
SK Hynix and Samsung are making HBM2. I haven't heard anything about Micron getting into it, but I'm not sure.
I mean, but can the added benefits of (enabling) tile-based rasterization, working alongside proper driver optimization really boost their “high-end gaming” cards beyond the 1080 Ti?
There’s a chance, yes.
TBR will reduce power draw, so the card will then hit higher clocks.
At the same time, draw binning can occur using part of the TBR algorithm’s side effects, so even less work can be done per draw, which leads to the draw completing faster, which leads to higher framerates.
How high it will go depends entirely on the GPU’s ability to keep the pipelines filled after an in-flight draw is canceled.
You might see 1% improvement or you might see 20% improvement (even though, technically, the geometry performance may have doubled)… but it won’t be consistent.
You guys do realize that AMD has stated there is no performance DELTA between pro and gaming mode with this card, right?
This card’s “gaming mode” is NOT even running Vega optimized drivers. AT ALL.
RX VEGA is launching July 30th/August for a reason. The drivers are not even finished.
This Vega card is running on old GCN drivers not optimized for Vega at all.
At the end of the day, this is a WORKSTATION video card. If you trolls and fanboys had any brains, you would ask yourselves why nobody does gaming benchmarks on NVIDIA Quadro workstation cards and draws conclusions about how well their gaming cards should do.
no gaming benchmarks on Quadro
if you say so
http://cdn.wccftech.com/wp-content/uploads/2016/12/NVIDIA-Pascal-Quadro-P6000_Hitman-2016.png
maybe it’s because those are expensive and Nvidia doesn’t provide samples
or Vega is advertised as a pro card you can game on as well
like a Titan
Since this is one of the few cards with 16 GB of VRAM, could you please run some games at 4K to observe the max VRAM usage?
RotTR and CoD are some games that commit a lot of VRAM.
Imagine waiting all this time for a 1075. Man I hope the RX is faster or at least 200 notes cheaper than I thought it would be.
This level of performance would be decent for about $375.
For the Quadro testing, did you use the latest drivers, or are the numbers from a previous review? I remember that as of the GeForce 378.66 driver and beyond, new OpenCL updates were added to the drivers. That might be the reason the Titan Xp scored pretty well in the OpenCL testing vs the Quadro cards.
With OpenCL tests for the high-end pro cards, you need to find test cases that push the memory more than the raw compute power.
In particular for rendering: when you consider most CGI scenes in production, you are looking at multiple GB of VRAM needed, normally exceeding that of the card itself, so the performance of the card's I/O and caching is of critical importance.
np it cost 450$
gaming ver
It’s highly likely that the gaming drivers are still the modified Fiji drivers that Vega was first shown gaming on.
It would certainly explain why the card behaves like an OC’d Fiji in gaming, and its observed tile based rendering pattern is identical to Fiji, despite Vega totally changing it. Also, the gigantic uplift in pro applications is not present in gaming.
I suspect when the RX drivers drop, this will be an extremely fast card in gaming. Perhaps faster overall than the 1080 Ti or Xp.
Thanks for keeping the hype alive 😀
it’s extremely unlikely that RX Vega will perform much better. If it could, AMD would’ve launched it first, instead of letting FE out the door early. They aren’t *that* dense.
I won’t comment on the DX/OGL drivers but the Vulkan driver cannot be too old. Vega FE supports version 1.0.39 which only received driver support in April and is still the current release. This API version didn’t exist yet at the time of demoing in December/January.
It’s the Fiji driver. This has now been confirmed. AMD supports Fiji, hence the latest Vulkan update.
Vega FE is not using Vega drivers for gaming.
If it is true that they are using the Fiji code base with minimal changes at this point, then I would expect significant improvements, given this is the first AMD GPU that supports packed math.
That means they will need to do a lot of adjustment to the Fiji driver code to make use of it, but if they do, they will be able to pack multiple operations into fewer clock cycles. This could bring a very big boost to performance if it is the case.
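To put rough numbers on the commenter's point: Vega's packed math executes two FP16 operations per 32-bit lane per clock, so theoretical FP16 throughput doubles over FP32, provided the driver and application actually use it. A sketch under that assumption (the helper name is illustrative):

```python
# Peak throughput with optional 2x FP16 packing (Vega's packed math).
# pack=1 models plain FP32; pack=2 models two FP16 ops per 32-bit lane.
def peak_tflops(cores: int, clock_mhz: float, pack: int = 1) -> float:
    # The x2 counts an FMA as two floating-point operations.
    return cores * 2 * pack * clock_mhz * 1e6 / 1e12

fp32 = peak_tflops(4096, 1600)           # ~13.1 TFLOPS
fp16 = peak_tflops(4096, 1600, pack=2)   # ~26.2 TFLOPS with packed math
print(round(fp32, 1), round(fp16, 1))
```

None of that doubling shows up unless the driver actually emits packed instructions, which is the commenter's point about the Fiji code base.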
I observed that NVIDIA disables the geometry tile caching (introduced in Maxwell) on their Quadro cards. My guess is that pro users like to throw a lot of triangles at the cards, which would result in constant flushing of the tile cache once it’s full.
My guess is that if AMD targets this card at pro users, the geometry tile caching introduced in Vega is also disabled on it. That could lead to significant performance improvements on the gamer variant of this card.
To test this theory, I would run the gaming benchmarks on a Quadro and a GeForce that use the same GPU.
That said, geometry tile caching is phenomenal (see the GeForce GTX 750 release at the time!), but I can hardly imagine it’s going to be enough to do significantly better than a GeForce GTX 1080.
Thank you Ryan, I do appreciate the review 🙂
Any chance of seeing more purely synthetic kinds of results?
I think many of us are quite curious to see how it performs in pixel/texel fillrate tests and tessellation/geometry tests. I think they would reveal much about the architecture.
Yeah, it would be interesting to see how the HBCC compares as well; render out a scene with some super massive textures, maybe.
I hate your graphs, why can’t you just do a normal FPS chart like everyone else to go along with your current ones?
Frame times are much more informative than FPS; FPS is always an average (over at least 1 second).
Frame times are much better at showing whether there are visual stutters.
For example, if you have a 1-second period at 100 FPS and 5 of those frames are much slower than the average, you won't really notice it in the FPS number, since all 100 frames rendered within the second. But some frames were much, much slower, so you get a long ~30 ms gap with no update on screen.
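The commenter's scenario can be shown with a toy example (the frame times are invented for illustration): the average FPS looks healthy while a frame-time plot would immediately expose the five 30 ms hitches.

```python
# 100 frames in ~1 second: 95 fast 9 ms frames plus 5 slow 30 ms ones.
frame_times_ms = [9.0] * 95 + [30.0] * 5

total_s = sum(frame_times_ms) / 1000.0
avg_fps = len(frame_times_ms) / total_s

print(f"average: {avg_fps:.1f} FPS")              # ~99.5 FPS, looks fine
print(f"worst frame: {max(frame_times_ms)} ms")   # 30.0 ms, a visible hitch
```

The same data as an FPS bar would hide exactly the stutter this site's frame-time graphs are designed to reveal.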
calm your 8008135 Marcelo Viana
lol, ok you’re right, even if I like to think my brain is more capable than that. Well, maybe just a little.
I have to confess that I’m really anxious. I’ve been waiting for this new arch for so long, you know…
About what I expected.
I think this will pass the 1080 with a bit of driver optimization, and like any AMD card, it will get better with age.
Great Content!
I realise it’s still early days for the Vega Pro FE, but nobody seems to be testing the impact of AMD’s “Infinity Fabric”.
AMD released a video demo of DOOM (I think) using an artificially limited card with 2GB of VRAM, which then utilised unused system memory/storage capacity and increased both the minimum frame rate (by up to 100%) and the maximum frame rate (by up to 50%).
Would it be worthwhile to test this feature in a system, say with 8GB vs 32/64GB of RAM (or more in an X-series board) and perhaps 2x NVMe drives installed (one empty), to see whether “Infinity Fabric” actually makes any impact?
Apologies, I meant to say “High Bandwidth Cache Controller” NOT “Infinity Fabric”
WinHEC 2006 commitment was finally fulfilled by VEGA
WDDM v2 and Beyond [WinHEC 2006; 1.81 MB]
http://download.microsoft.com/download/5/b/9/5b97017b-e28a-4bae-ba48-174cf47d23cd/pri103_wh06.ppt
Hey Ryan,
I honestly think your Fury X 1440p graph is very old; there is no stuttering on new drivers. If I’m wrong, sorry, I don’t have FCAT 🙂 I checked with the Afterburner on-screen frame-time graph.
… and I mean Rise of The Tomb Raider 1440p test.