An interesting night of testing
We have the first review!
Last night I did our first ever live benchmarking session using the just-arrived Radeon Vega Frontier Edition air-cooled graphics card. Purchasing it directly from a reseller, rather than being sampled by AMD, gave us the opportunity to test a new flagship product without an NDA in place to keep us silent, so I thought it would be fun to let the audience and community go along for the ride of a traditional benchmarking session. Though I didn't get all of what I wanted done in that 4.5-hour window, it was great to see the interest and excitement for the product and the results we were able to generate.
But to the point of the day – our review of the Radeon Vega Frontier Edition graphics card. Based on the latest flagship GPU architecture from AMD, the Radeon Vega FE card has a lot riding on its shoulders, despite not being aimed at gamers. It is the FIRST card to be released with Vega at its heart. It is the FIRST instance of HBM2 being utilized in a consumer graphics card. It is the FIRST in a new attempt from AMD to target the group of users between gamers and professional users (like NVIDIA has addressed with Titan previously). And, it is the FIRST to command as much attention and expectation for the future of a company, a product line, and a fan base.
Other than the architectural details that AMD gave us previously, we honestly haven't been briefed on the performance expectations or the advancements in Vega that we should know about. The Vega FE products were released to the market with very little background, only well-spun turns of phrase emphasizing high performance and compatibility for creators. There has been no typical "tech day" for the media to learn fully about Vega, and there were no samples from AMD to media or analysts (that I know of). Unperturbed by that, I purchased one (several, actually, to see which would show up first) and decided to do our testing.
On the following pages, you will see a collection of tests and benchmarks that range from 3DMark to The Witcher 3 to SPECviewperf to LuxMark, attempting to give as wide a view of the Vega FE product as I can in a rather short time window. The card is sexy (maybe the best looking I have yet seen), but it will disappoint many on the gaming front. For professional users who are okay with not having certified drivers, performance is more likely to raise some impressed eyebrows.
Radeon Vega Frontier Edition Specifications
Through leaks and purposeful information dumps over the past couple of months, we already knew a lot about the Radeon Vega Frontier Edition card prior to the official sale date this week. But now with final specifications in hand, we can start to dissect what this card actually is.
| | Vega Frontier Edition | Titan Xp | GTX 1080 Ti | Titan X (Pascal) | GTX 1080 | TITAN X | GTX 980 | R9 Fury X | R9 Fury |
|---|---|---|---|---|---|---|---|---|---|
| GPU | Vega | GP102 | GP102 | GP102 | GP104 | GM200 | GM204 | Fiji XT | Fiji Pro |
| GPU Cores | 4096 | 3840 | 3584 | 3584 | 2560 | 3072 | 2048 | 4096 | 3584 |
| Base Clock | 1382 MHz | 1480 MHz | 1480 MHz | 1417 MHz | 1607 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz |
| Boost Clock | 1600 MHz | 1582 MHz | 1582 MHz | 1480 MHz | 1733 MHz | 1089 MHz | 1216 MHz | – | – |
| Texture Units | ? | 224 | 224 | 224 | 160 | 192 | 128 | 256 | 224 |
| ROP Units | 64 | 96 | 88 | 96 | 64 | 96 | 64 | 64 | 64 |
| Memory | 16GB | 12GB | 11GB | 12GB | 8GB | 12GB | 4GB | 4GB | 4GB |
| Memory Clock | 1890 MHz | 11400 MHz | 11000 MHz | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 1000 MHz | 1000 MHz |
| Memory Interface | 2048-bit HBM2 | 384-bit G5X | 352-bit G5X | 384-bit G5X | 256-bit G5X | 384-bit | 256-bit | 4096-bit HBM | 4096-bit HBM |
| Memory Bandwidth | 483 GB/s | 547.7 GB/s | 484 GB/s | 480 GB/s | 320 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s |
| TDP | 300 watts | 250 watts | 250 watts | 250 watts | 180 watts | 250 watts | 165 watts | 275 watts | 275 watts |
| Peak Compute | 13.1 TFLOPS | 12.0 TFLOPS | 10.6 TFLOPS | 10.1 TFLOPS | 8.2 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS |
| Transistor Count | ? | 12.0B | 12.0B | 12.0B | 7.2B | 8.0B | 5.2B | 8.9B | 8.9B |
| Process Tech | 14nm | 16nm | 16nm | 16nm | 16nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $999 | $1,200 | $699 | $1,200 | $599 | $999 | $499 | $649 | $549 |
The Vega FE shares enough of its specification listing with the Fury X that the comparison deserves special recognition. Both cards sport 4096 stream processors, 64 ROPs, and 256 texture units. The Vega FE runs at much higher clock speeds (35-40% higher), upgrades to the next generation of high-bandwidth memory, and quadruples the capacity. Still, there will be plenty of comparisons between the two products, looking to measure IPC changes from Fiji's CUs (compute units) to the NCUs built for Vega.
The Radeon Vega GPU
Clock speeds also see a shift this time around with the adoption of "typical" clock ratings. This is something NVIDIA has been using for a few generations since the introduction of GPU Boost, and it tells the consumer how high clocks should go in a nominal workload. Normally I would say a gaming workload, but since this card is targeted at professional users and the like, I assume this applies across the board. So even though the GPU is rated at a "peak" clock rate of 1600 MHz, the "typical" clock rate is 1382 MHz. (As an early aside, I did NOT see 1600 MHz at any point in my testing time with our Vega FE; it settled in at ~1440 MHz most of the time.)
The 13.1 TFLOPS of peak theoretical compute is impressive, beating out the best cards from NVIDIA, including the GeForce GTX 1080 Ti and the Titan Xp. How that translates into gaming or rendering power remains to be seen, but in general, recent AMD cards have posted higher peak rates for equal "real-world" performance.
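As a quick sanity check, that 13.1 TFLOPS figure falls straight out of the shader count and the peak clock; a minimal sketch (the factor of 2 assumes a fused multiply-add counting as two operations):

```python
# Peak FP32 throughput = stream processors x 2 ops/clock (FMA) x peak clock
cores = 4096                # Vega FE stream processors
peak_clock_hz = 1.6e9       # rated "peak" clock of 1600 MHz
tflops = cores * 2 * peak_clock_hz / 1e12
print(f"{tflops:.1f} TFLOPS")   # -> 13.1 TFLOPS, matching the spec table
```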
Vega Frontier Edition uses two stacks of HBM2, 8GB each, for a total graphics memory allotment of 16GB. Running at an effective speed of 1.89 GHz, this gives a total memory bandwidth of 483 GB/s, lower than the 512 GB/s of the Fury X and lower than the Titan Xp, which is rated at 547 GB/s with its GDDR5X implementation.
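The 483 GB/s rating likewise follows from the interface width and the effective data rate; a back-of-the-envelope sketch:

```python
# Bandwidth = bus width (bits) x effective data rate (Gbps) / 8 bits per byte
bus_width_bits = 2048      # two HBM2 stacks at 1024 bits each
data_rate_gbps = 1.89      # 1.89 Gbps effective per pin
print(f"{bus_width_bits * data_rate_gbps / 8:.0f} GB/s")  # ~484 GB/s, in line with the 483 GB/s rating
```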
Power consumption is rated at 300 watts for the air-cooled card (which we are testing today) and 375 watts for the water-cooled version. That variance raises some concerns, as it suggests the air-cooled version will be thermally limited to some degree, while the water-cooled version can run the GPU at a lower temperature and hold clock speeds closer to the 1600 MHz peak for longer periods.
The price of $999 for the model we are testing today (and $1,499 for the water-cooled option) plants the Vega FE firmly in Titan territory. The Titan Xp currently sells from NVIDIA for $1,200. Obviously, for our testing we are going to be looking at much lower priced GeForce cards (GTX 1070 through GTX 1080 Ti), but we are doing so purely as a way to gauge potential RX Vega performance.
The Gorgeous Radeon Vega Frontier Edition
Let’s talk about the card itself for a bit. I know that the design has been seen in renderings and at trade shows for a while, but in person, I have to say I am impressed. It will likely be a mixed bag – the color scheme is definitely not neutral and if you don’t appreciate the blue/yellow scheme then no amount of quality craftsmanship will make a difference.
The metal shroud and back plate have an excellent brushed-metal texture and look to them, and even the edges of the PCB are rounded. The fan perfectly matches the blue hue of the card.
A yellow R logo cube rests on the back corner, illuminating the hardware in an elegant light. The top of the card features the Radeon branding with the same yellow backlight but I do wish the Vega logo on the face of the card did the same.
Even the display connector plate feels higher quality than on other cards, with a coated metal finish. It features three full-size DisplayPort connections and a single HDMI port.
Above the dual 8-pin power connectors you'll find a GPU Tach, a series of LEDs that light up progressively as GPU load increases.
Though the look and style of a graphics card can only take you so far, and can only add so much to the value of a product that 99 times out of 100 ends up inside a computer case, it is great to see AMD take such pride in this launch. I can only hope that the consumer variant sees as much attention paid to it.
I guess I do not have to feel any remorse about buying a G-Sync monitor for yet another year.
There is still this FLOPS-to-performance ratio that works against AMD cards of recent generations.
The RX 580 needs 6 TFLOPS to match (or not even, depending on the review site) a 4 TFLOPS GTX 1060.
In the same vein, if I take Vega's 13.1 TFLOPS, adjust it to the actual frequency of 1440 MHz instead of 1600 MHz, i.e. 11.8 TFLOPS, and then apply a 2/3 multiplier… I get 7.9 TFLOPS, while the GTX 1080 has 8.2 and the GTX 1070 has 5.9. And this is roughly where Vega lands.
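For what it's worth, that arithmetic checks out; a quick sketch (the 2/3 multiplier is the rough FLOPS-to-performance ratio described above, not a measured constant):

```python
# Derate Vega's peak TFLOPS by observed clock, then by a rough efficiency factor
peak_tflops = 13.1
effective = peak_tflops * 1440 / 1600   # ~11.8 TFLOPS at observed clocks
adjusted = effective * 2 / 3            # ~7.9 "NVIDIA-equivalent" TFLOPS
print(f"{effective:.1f} -> {adjusted:.1f}")  # lands between GTX 1070 (5.9) and GTX 1080 (8.2)
```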
With so many proclaimed architectural improvements, there should be potential to improve on that, however. For example:
– Deferred rendering
– Use of the shader array for geometry (where AMD has historically lagged) – especially under DX11, this might be implementable so that it is transparent to the game, and then it should be an absolute beast at tessellation
Does any of that work?
I think some of the gaming-specific improvements aren't fully working yet – tile-based rasterization, for one – and I heard somewhere that geometry culling isn't fully working yet either.
My Titan X Pascal runs Cinebench R15 OpenGL at 186 FPS @ 1418 MHz with ~30-50% GPU load. How did you get a Titan Xp, which is faster, to run so slowly?
That's interesting, not sure. We ran the test several times however.
Cinebench is more of a CPU benchmark than an OpenGL benchmark (yes, even the OpenGL part of it). Out of curiosity, have you tried an application profile so that the drivers would actually try to load the GPU in that bench?
I want to say, great livestream, even if it was live 🙂
Keep up the good reviews.
Hey, thanks!
Conclusion
A good professional card on par with the P5000 at half the price.
And you could game on it…
But you can game on a P5000 or P4000 as well, and these numbers tell us that gaming on those parts would be better than on the Vega FE.
Please don't compare this card with the P4000/P5000; Nvidia has a hell of a lot of workstation-specific features… you know it. Quadro has always shown FirePro its place in the workstation market.
Titan is targeted at ultra-high-end gaming plus individual researchers and small business houses for compute/workstation applications.
Vega FE seems to be targeting only the latter half of Titan's market.
Anyway, Pascal is sufficient for Vega.
So you're saying to handicap it by comparing it to a sector that it's not supposed to be compared to? It's intended to rival the Quadro lineup, and that's quite clear from the benchmarks and from the official Frontier Edition page on AMD's site.
This is in no way a rival to a Titan or 1080 Ti; the scores show that. It cleans the Titan's clock in many workstation applications, and the Titan cleans its clock in gaming applications. It tends to go neck and neck with the P4000 and P5000 in performance, and is arguably a much better purchase than the P5000, costing half the price while matching it in various benchmarks.
That's like saying we should compare a 1080 Ti to an RX 560: sure, they can both perform the same functions, but come on, they clearly have different intentions, and that's transparent. It's ignorant to say the two are in the same sector, just as it is ignorant to say the FE doesn't belong with the P5000 and P4000. As all the professional-tier benchmarks have shown, it fights amazingly in those areas on price/performance, whereas in gaming it's an overpriced 1070.
I have been watching A.I. hardware for a while, and this new AMD Frontier card intrigues me.
The reason it is slow is that it is not using the tile rasterizer, and it is not hitting above 1400 MHz on the GPU even though the boost clock is 1600 MHz.
If the card held a 1600 MHz clock and had better drivers (including more optimizations and the tile rasterizer turned on), I would expect a 25% to 30% improvement, making it a little faster than a 1080 Ti sometimes and a little slower at other times.
I like the LED color control switch. There is a red setting and a blue setting. No green setting. I bet they're RGB LEDs. Can you imagine the meeting to decide that? "Why not green?" "Screw you, that's why!"
To be perfectly fair
You should give us gaming results compared against the Titan Xp and P5000… (the ones this card is fighting for).
I don't disagree. However, it just wasn't necessary to show the performance levels of Vega FE. I did not originally plan to include GTX 1070 in my results. Only after I saw where Vega FE gaming results were sitting did I realize I had to include it.
Hopefully, with Vega FE released to market, there can be some development on the benchmarking/testing software front – like some very, very large billion-polygon mesh scenes with debug testing enabled – to measure how Vega FE manages that HBM2 cache via the HBCC and the GPU's entire cache subsystem, moving data, textures, etc. between HBM2, regular DRAM, and even SSD-backed virtual paged GPU memory.
I'm very interested in knowing just how effectively Vega FE can manage very large 3D modeled scenes that exceed the 16GB of available HBM2 cache/video memory (or whatever the HBM2 memory size is for the various Vega SKUs). Hopefully Vega's HBCC can manage swapping data, textures, and GPU kernels into and out of that HBM2 cache and allow for 3D scenes much larger than 16GB. If Vega can manage large amounts of data/textures swapped out to, say, 64GB of regular DDR4 DRAM, and the HBCC can effectively handle swapping between the HBM2 cache and regular DRAM/paged GPU memory in the background, that will be great for 3D animation production workloads.
The Vega micro-architecture has many new features over Polaris and older GCN micro-architectures, which implies the usual development/optimization period that follows any new GPU micro-architecture release. Let the tweaking and testing begin, because there is so much new IP to optimize for with Vega.
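A rough sketch of the kind of oversubscription probe described above might look like the following (hypothetical methodology, assuming a working pyopencl stack; buffer sizes and counts are illustrative, and allocation beyond physical VRAM may simply fail on drivers without HBCC-style paging):

```python
# Hypothetical probe: allocate more GPU buffers than fit in 16GB of HBM2,
# then time repeated sweeps over all of them. With HBCC-style paging the
# sweeps should slow down gracefully rather than fail outright.
import time
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

CHUNK = 1 << 30                   # 1 GiB per buffer (illustrative)
N_BUFFERS = 24                    # 24 GiB total, exceeding a 16GB card
host = np.empty(CHUNK, dtype=np.uint8)

bufs = [cl.Buffer(ctx, cl.mem_flags.READ_WRITE, CHUNK) for _ in range(N_BUFFERS)]

for sweep in range(3):
    start = time.time()
    for buf in bufs:              # touching every buffer forces residency
        cl.enqueue_copy(queue, host, buf)
    queue.finish()
    print(f"sweep {sweep}: {time.time() - start:.2f} s")
```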
I watched the WHOLE live stream. I liked it a lot; thank you for doing that. Great job as always, Ryan and PCPer!
Awesome, thanks! We had a surprising number of people check in on it!
+1
I really enjoyed watching the replay!
It's actually sad that most of the pages are about games.
– We have the HBCC to look at, but no: how does it perform in that game?
– The HBCC can address terabytes; let's figure out a way to see how it works. Not now, we need to see how that other game runs.
– For a card that is made for creation, let's see what ROCm can do with it. ROCm? What is that? We have to see this other game.
– Let's hook up a bunch of cameras and see if this card can really handle them. Nope, another game.
– OK, OK, just run some synthetic benchmarks that anyone can do themselves and call it a day.
I know most of the audience is gamers, and as you well put it, there's no problem in covering that. But for God's sake, from a professional standpoint, do a review of the card for the people who are really interested in it, not a preview of how a future RX card is going to perform; that's useless, really.
There is a lot of new tech on this card, like:
– HBCC: find a way to test at least 1 terabyte of virtual address space.
– NCU: 8-bit (512 ops per clock), 16-bit (256 ops per clock), and 32-bit (128 ops per clock); see if it scales or if it's just bullS@it propaganda, since Nvidia felt it necessary to build a proper 8-bit processor instead of using CUDA. (See the sketch after this list.)
– Next-gen pixel engine: this is the kind of performance we want to see.
– Programmable geometry pipeline: create a very complex mesh and see how the thing performs.
– And test the F@cking FreeSync 2.
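For context, the 32-bit rate quoted in the list above is at least self-consistent with the card's headline compute number; a quick check (the per-CU rates are the figures from the list, with 64 CUs and the 1600 MHz peak clock assumed from the spec table):

```python
# Scale per-CU ops/clock up to the whole GPU: ops/clock/CU x 64 CUs x 1.6 GHz
for bits, ops_per_clk_cu in [(32, 128), (16, 256), (8, 512)]:
    tops = ops_per_clk_cu * 64 * 1.6e9 / 1e12
    print(f"{bits}-bit: {tops:.1f} TOPS")   # 32-bit -> 13.1, matching the peak TFLOPS spec
```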
Am I asking too much? Well, maybe, but it's all about the card; it's actually what this card brings, for God's sake.
And before I forget, thank you for bringing us this game performance data and some professional synthetic benchmarks; even though it's much less than I expected, I really appreciate it.
Still waiting for a real review of the Vega FE card.
Honestly, I'm totally with you on this. But for now we have no way to evaluate some of those use cases and no input from AMD on how to do so.
Thank you Ryan, I'm glad we are on the same page here. I'm waiting for a full review and hoping AMD comes out of the cave and gives us some answers. It's us consumers, the ones interested in their product, who are asking, for God's sake.
Sad to see DVI being phased out on some new cards. If they really want to reduce port flexibility, I would prefer HDMI be ditched; I personally use DVI as my primary. I don't see the benefit of three DisplayPorts instead of, say, two DisplayPorts and a dual-link DVI.
Before anyone asks why: first off, we don't all want to buy a new monitor just to use a GPU; there are many capable screens out there without DisplayPorts. Also, my current monitor, even though it has a DisplayPort, has a nasty bug where the port gets stuck in sleep mode.
DVI and HDMI are pin compatible. Buy a passive adapter and quit whining.
I think the FE even comes with an SL adapter in the box.
Hey Ryan – great job as always! Anyway, can you guys do something about the size and blurriness of the font on your graphs? It's super hard to read.
For reference, I've been viewing your site on Dell 27″ UltraSharps for longer than I can remember, and that is the only thing that lets it down.
Cheers – keep on rocking!
Yah, that font needs to be addressed. Legacy scripting… I'll try to work on it!
So if the frequency did not dip during the voltage drops in ROTR, did the FPS drop as well during those periods? If it can run the card at those frequencies with less voltage, might that suggest some kind of bug in their voltage curves?
Frequency did not appear to drop, though our monitoring tool may have been polling at too low a resolution to catch it.
Was the average clock for the NVIDIA GPUs mentioned?
Not in this story – but the 1080 Ti review and others list those for each card.
I am really interested in the HBCC implementation; can you guys run some tests on this awesome new tech?
Does the Radeon Vega Frontier Edition have Solidworks certified drivers for realview? Does it have certified drivers for *any* professional CAD application? Thanks!
Not yet. Given the new architecture, with features such as packed math, etc., I think AMD wants the card in the hands of developers first.
But given that it is being branded under the Radeon Pro line (look at the web domain), I think they will add them later; they want collaboration with the tool makers first.
So you didn't use 100% maximum possible crazy settings in GTA V: anti-aliasing (MSAA) at half, post-processing at High instead of Ultra… and you didn't even show us what you had set in the advanced graphics settings.
Thanks for a shitty review. It's so hard to find people who actually run games 100% completely maxed out in all possible settings these days.
Never mind, we do get to see the advanced settings; I missed that… but still, not maxed out everywhere.
Max out GTA settings – extended distance scaling, in particular – and you will end up measuring CPU performance, not GPU performance. It’s really not a particularly clever idea.
And even at these settings, it’s still identical workloads being compared.
Oh, I see that extended distance is maxed in this test. If so, the big difference in the test results between AMD and Nvidia is likely down to the less than ideal Radeon DX11 driver.
Ughh… why are there no RX 480/580s in these benchmarks?
What would be the point of cluttering up the graphs with a significantly slower GPU?
It would be useful so you could check whether Vega is performing consistently with the Polaris architecture, in terms of IPC compared to Fiji (which somehow seems to have gone down).
Great review, Ryan. The results seem puzzling. The fact that the Vega product is not, at minimum, consistently matching a GTX 1080 is worrisome. I'd be curious to see what AMD has to say about this in due course, and what they have to say at SIGGRAPH regarding the 'RX' variant.
The livestream was interesting too; it was fantastic to be able to see the benchmarking in progress and hear some of your input. Thanks!