Specifications and Design
We dive into the liquid-cooled version of Vega and find it performs better than expected.
Just a couple of short weeks ago we looked at the Radeon Vega Frontier Edition 16GB graphics card in its air-cooled variety. The results were interesting: gaming performance fell somewhere between the GTX 1070 and the GTX 1080 from NVIDIA’s current generation of GeForce products. That falls below many of the estimates from players in the market, including media, fans, and enthusiasts. But before we get to the RX Vega product family targeted at gamers, AMD has another data point for us to look at: a water-cooled version of Vega Frontier Edition. At a $1,500 MSRP, which we shelled out ourselves, we are very interested to see how it changes the face of performance for the Vega GPU and architecture.
Let’s start with a look at the specifications of this version of the Vega Frontier Edition, which will be…familiar.
| | Vega Frontier Edition (Liquid) | Vega Frontier Edition | Titan Xp | GTX 1080 Ti | Titan X (Pascal) | GTX 1080 | TITAN X | GTX 980 | R9 Fury X |
|---|---|---|---|---|---|---|---|---|---|
| GPU | Vega | Vega | GP102 | GP102 | GP102 | GP104 | GM200 | GM204 | Fiji XT |
| GPU Cores | 4096 | 4096 | 3840 | 3584 | 3584 | 2560 | 3072 | 2048 | 4096 |
| Base Clock | 1382 MHz | 1382 MHz | 1480 MHz | 1480 MHz | 1417 MHz | 1607 MHz | 1000 MHz | 1126 MHz | 1050 MHz |
| Boost Clock | 1600 MHz | 1600 MHz | 1582 MHz | 1582 MHz | 1480 MHz | 1733 MHz | 1089 MHz | 1216 MHz | – |
| Texture Units | ? | ? | 224 | 224 | 224 | 160 | 192 | 128 | 256 |
| ROP Units | 64 | 64 | 96 | 88 | 96 | 64 | 96 | 64 | 64 |
| Memory | 16GB | 16GB | 12GB | 11GB | 12GB | 8GB | 12GB | 4GB | 4GB |
| Memory Clock | 1890 MHz | 1890 MHz | 11400 MHz | 11000 MHz | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 1000 MHz |
| Memory Interface | 2048-bit HBM2 | 2048-bit HBM2 | 384-bit G5X | 352-bit | 384-bit G5X | 256-bit G5X | 384-bit | 256-bit | 4096-bit (HBM) |
| Memory Bandwidth | 483 GB/s | 483 GB/s | 547.7 GB/s | 484 GB/s | 480 GB/s | 320 GB/s | 336 GB/s | 224 GB/s | 512 GB/s |
| TDP | 300 / 350 watts (switchable) | 300 watts | 250 watts | 250 watts | 250 watts | 180 watts | 250 watts | 165 watts | 275 watts |
| Peak Compute | 13.1 TFLOPS | 13.1 TFLOPS | 12.0 TFLOPS | 10.6 TFLOPS | 10.1 TFLOPS | 8.2 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS |
| Transistor Count | ? | ? | 12.0B | 12.0B | 12.0B | 7.2B | 8.0B | 5.2B | 8.9B |
| Process Tech | 14nm | 14nm | 16nm | 16nm | 16nm | 16nm | 28nm | 28nm | 28nm |
| MSRP (current) | $1,499 | $999 | $1,200 | $699 | $1,200 | $599 | $999 | $499 | $649 |
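As a quick sanity check, the peak-compute and memory-bandwidth rows for Vega FE can be reproduced from the core count, boost clock, and memory interface (a sketch in Python; the 1.89 Gbps per-pin rate is the effective HBM2 data rate implied by the 1890 MHz memory clock figure):

```python
# Peak FP32 throughput: shader cores x 2 FLOPs per clock (FMA) x boost clock.
def peak_fp32_tflops(cores: int, boost_mhz: int) -> float:
    return cores * 2 * boost_mhz / 1e6

# Memory bandwidth: bus width in bytes x effective per-pin data rate (Gbps).
def mem_bandwidth_gbs(bus_bits: int, rate_gbps: float) -> float:
    return bus_bits / 8 * rate_gbps

vega_fe_tflops = peak_fp32_tflops(4096, 1600)  # ~13.1 TFLOPS, as listed
vega_fe_bw = mem_bandwidth_gbs(2048, 1.89)     # ~483.8 GB/s; the table rounds to 483
print(round(vega_fe_tflops, 1), round(vega_fe_bw, 1))
```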
The base specs remain unchanged; AMD lists the same memory frequency and even the same GPU clock rates across both models. In practice, though, the liquid-cooled version runs at higher sustained clocks and overclocks a bit more easily as well (more details later). What does change with the liquid-cooled version is a usable BIOS switch on top of the card that lets you move between two distinct power draw states: 300 watts and 350 watts.
First, it’s worth noting this is a change from the “375 watt” TDP that this card was listed at during the launch and announcement. AMD was touting a 300-watt and a 375-watt version of Frontier Edition, but it appears the company backed off a bit, erring on the side of caution to avoid breaking any of the specifications of PCI Express (board slot or auxiliary connectors). Even more cautious is AMD’s choice to have the default state of the switch on the Vega FE Liquid card at 300 watts rather than the more aggressive 350 watts. AMD claims this avoids problems with lower quality power supplies that may struggle to deliver slightly over 150 watts of power (and the resulting current) through the 8-pin power connections. I would argue that any system that is going to host a $1500 graphics card can and should be prepared to provide the necessary power, but for the professional market, AMD leans toward caution. (It’s worth pointing out that the RX 480 power issues that may have prompted this internal decision making were more problematic because they impacted power delivery through the motherboard, while the 6- and 8-pin connectors are generally much safer to run beyond their ratings.)
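The in-spec ceiling AMD is working against is easy to tabulate: PCI Express allows 75 W through the board slot and 150 W per 8-pin auxiliary connector, so a dual 8-pin card tops out at exactly the originally announced 375 W without exceeding any rating (a back-of-the-envelope sketch):

```python
# In-spec PCI Express power budget for a dual 8-pin card.
SLOT_WATTS = 75        # PCIe x16 slot limit per the CEM specification
EIGHT_PIN_WATTS = 150  # per 8-pin auxiliary connector

budget = SLOT_WATTS + 2 * EIGHT_PIN_WATTS  # 375 W: the originally announced figure
headroom_at_350w = budget - 350            # margin left by the shipping 350 W mode
print(budget, headroom_at_350w)
```

The shipping 350-watt mode therefore leaves a 25-watt cushion below the spec ceiling.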
Even without clock speed changes, the move to water cooling should result in better and more consistent performance by removing the overheating concerns that surrounded our first Radeon Vega Frontier Edition review. But let’s dive into the card itself and see how the design process created a unique liquid cooled solution.
The Radeon Vega Frontier Edition Liquid Cooled Card
The liquid-cooled card shares its dimensions with the air-cooled card, but without an integrated blower fan, the likeness stops there. The color scheme is reversed, with a yellow brushed-metal body and blue accents and illumination. The top Radeon logo and the blue R cube on the end light up in blue, and as I stated on Twitter, I really hate blue LEDs. They are just uncomfortable to my eyes, and I know I’m not the only one. Otherwise, the design of this card is just as sexy as the first Vega FE we looked at.
It still requires a pair of 8-pin power connections to run, and the liquid cooling tubing and radiator power cabling exit from the front of the card. There is plenty of length to the tubing and cabling, allowing for installation in nearly any chassis configuration.
On the back is a full-cover backplate with an exposed area for the GPU tach, a set of LEDs that defaults to blue and indicates the GPU workload of the card. The blue on these is particularly piercing…
Internally we have a unique liquid cooler design. On the left is the pump and block covering the GPU and HBM2 stacks, with a blue block covering the power delivery on the card as well. Liquid flows in from the top into the GPU block, through the GPU block outlet on the upper right, down through the VRM cooling, around to the far left, and then back out to the radiator.
This unit on the right is part of the diaphragm pump design that makes this card interesting. Think of it as a flexible reservoir with a high-tension spring that creates pressure back into the system. A diaphragm pump works with one-way check valves and a reciprocating diaphragm to create alternating areas of pressure and vacuum. The T-split you see at the top of the primary pump allows liquid stored in the overflow area to keep the cooler working reliably through the natural evaporation of fluid over time. This is very similar to the kinds of pumps used in fish tanks and artificial hearts, and likely a more expensive solution than was found on the Radeon Pro Duo or Fury X, as an attempt to correct the deficiencies (noise, reliability) of those older generations.
This kind of cooler design was only made possible by the extended PCB of the Vega Frontier Edition, either by design or as a happy accident. The noise made by this pump is very different from that of traditional AIO coolers we have used in the office, more of a “gurgle” than any kind of “whine”. It’s more muted than the Radeon Pro Duo or Fury X, that’s for certain.
LOL, my maybe next video card will gurgle when it gets excited!
Then I read the review. My next card will not be gurgling.
Waaahhh, I’m angry because there’s a thing I can buy.
I’m not angry. I was trying to be a bit funny.
AMD made it clear that RX Vega will be faster in games. The FE was designed as a professional card that happens to have gaming drivers as well. It might be a mistake to assume that RX will perform the same.
I cannot prove it, yet it cannot be disproven either. The answer is that we will have to wait and see; we still might be very surprised when RX arrives, and it is not too long from now. This LC version is surely impressive. I would have liked to see the pro performance; it’s a shame not to see any of it here, not even one benchmark.
Ah yes, we will see, but what will AMD fans say once it does exactly the same as FE in gaming?
If RX Vega lands between the GTX 1080 and the GTX 1080 Ti, that’s good enough for most. If you want the top FPS performance and do not care about the compute, then by all means go with Nvidia and part with more than a few of your Benjamins. But Nvidia has no x86 license to speak of, and all those Radeon Pro WX (formerly FirePro-branded) GPU SKUs and Radeon Instinct MI25 AI GPU SKUs will package-price very well with the Zen/Epyc workstation/server/HPC CPU SKUs.
So Lisa and Raja can both laugh all the way to the bank with those Zen-Epyc/Vega (Radeon WX/Instinct) combo deals and the mad professional-market revenues those professional SKUs will produce.
having no x86 license is not really a problem for nvidia. for professional solutions, price is not the #1 metric, unlike with regular consumers. the most important metric is always whether you can do the job or not. that’s why nvidia is able to sell their solutions at such high prices to begin with.
also, Vega 10 lacks FP64 performance. so for HPC-class machines, even if they end up using an AMD CPU, the GPU portion will still very likely end up with nvidia tesla due to the need for massive FP64 performance.
Nvidia’s not having any x86 license means that AMD can and will package-price its Epyc SKUs with its Radeon Pro WX/Radeon Instinct MI25/other AI SKUs, while Nvidia can only price its GPU SKUs. So Nvidia cannot offer any CPU/GPU package deals, and that includes server/workstation motherboard deals as well, with AMD and its motherboard partners able to offer Epyc CPU, Radeon Vega WX/Instinct GPU, and motherboard package deals.
Now Nvidia has some of its Power9 customers, but there is also AMD’s founding membership in the OpenCAPI standards group created by IBM and its partners. So AMD will also support OpenCAPI with its GPUs and be able to get some GPU business with any Power9/OpenPower licensees; the Power9 SKUs all support IBM’s/the OpenCAPI group’s coherent interface standard. But currently most of the HPC/server/workstation market is based around the x86 32/64-bit ISA, and Epyc sales will get AMD some extra Radeon Pro WX/Radeon Instinct sales.
You are going to have to reference your statement with some form of GPU double-precision FP comparison table.
The peak double-precision compute performance is 819 GFLOPS(?) for Vega FE according to Reddit, but Vega FE is not a Radeon Pro-branded WX SKU, so who knows about any professional compute products AMD may have at the moment. And again, Wikipedia’s Quadro DP numbers are lacking, and you have not provided even some Quadro FP figures to back up your statement.
What about the Radeon Instinct MI25 and AI/inferencing workloads? And there will be Epyc/Vega systems using Radeon Pro WX 9100 SKUs for double-precision workloads, and the clock rates on any professional cards will be lower.
I wish someone would publish a definitive table of GPU double-precision FP metrics for all relevant GPUs; Wikipedia appears to be in the business of collecting donations and spaffing the money on non-essential uses. If the Radeon Pro WX 9100 has the same FP figures (don’t know), then maybe the cost will be lower than any GP100-based Quadro, or there will most likely be dual-Vega designs; who knows. But currently, without some DP FP comparison tables that list all of the most relevant GPUs and their respective true DP floating-point metrics, it’s hard to say. And not all HPC workloads require double-precision FP, including the AI/inferencing workloads.
Steve Burke over at GamersNexus has just published some undervolting benchmarks, showing some improvement with an air-cooled Vega FE undervolted, etc.
See article titled:
“Fixing Vega FE: Undervolting to Improve Performance & Power Draw”
“Nvidia’s not having any x86 license means that AMD can and will package price its Epyc SKUs with its Radeon Pro WX/Radeon Instinct MI25/other AI SKUs”
as i said, it is not a problem. professional clients look at the solution first, not the package. the whole package is useless if AMD cannot provide the solution needed by the professional client.
“You are going to have to refrence your statment with some form of GPU double percision FP comparsion tables.”
this is directly from AMD page:
https://instinct.radeon.com/en-us/product/mi/radeon-instinct-mi25/
AMD directly states that the MI25’s FP64 rate is 1/16 of the card’s FP32 rate (768 GFLOPS). the info has been there all along on AMD’s product page.
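For what it’s worth, the 1/16 ratio lines up with the 819 GFLOPS figure quoted earlier for Vega FE (a quick check, taking the FP32 numbers as given: 13.1 TFLOPS for Vega FE from the spec table, 12.29 TFLOPS for the MI25 from AMD’s product page):

```python
# FP64 throughput at 1/16 of the FP32 rate, expressed in GFLOPS.
def fp64_gflops(fp32_tflops: float, ratio: float = 1 / 16) -> float:
    return fp32_tflops * ratio * 1000

vega_fe_fp64 = fp64_gflops(13.1)  # ~819 GFLOPS, the figure quoted for Vega FE
mi25_fp64 = fp64_gflops(12.29)    # ~768 GFLOPS for the MI25's 12.29 TFLOPS FP32
print(round(vega_fe_fp64), round(mi25_fp64))
```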
The MI25 is for AI/inferencing workloads, so the 24.6 TFLOPS FP16 metric is what is going to be used for those AI/inferencing workloads; try to have the figures and such listed in your original reply. Also, do you know of any full tabular listings on the web that explicitly list every maker’s GPU FP64 numbers? Because the Wikipedia folks need to be called out on their crappy GPU listings. And what about reviews that start out by comparing Nvidia’s ROP/TMU counts with AMD’s ROP/TMU counts? That’s mostly never done. And what about Nvidia’s top-rated FP64 SKUs, where are those FP64 numbers, Wikipedia? AMD does need to develop some tensor-processor IP of its own, as Nvidia has added that to its professional Volta offerings.
Professional clients will price-compare their entire server systems, and any clients doing AI/inferencing, or other workloads, will always look at the total cost of the hardware and the total cost of ownership, power usage included.
So Nvidia not having any direct pricing influence over the CPU hardware puts Nvidia at a disadvantage compared to AMD. To add to that, AMD can and will package-price its Radeon WX/Instinct brand of professional GPU products in some offerings with its Epyc professional CPU offerings, so that’s an advantage for many professional solutions that Nvidia does not have, and I’ll include AMD server motherboard pricing, with AMD negotiating better package pricing with its motherboard partners to seal the deals.
All servers have to have CPUs, so AMD has that x86 advantage in a market that mostly uses the x86 32/64-bit ISA, with no Nvidia offerings there; ditto for server motherboards for Epyc server/HPC/workstation systems. Nvidia can only directly price its GPU SKUs, while AMD has direct control over CPU and GPU pricing, and some control over motherboard pricing for its server platforms.
AMD also has all of its SeaMicro server IP, which AMD can now license to any of the server OEMs that build servers based on Epyc SKUs. AMD may have shut down its SeaMicro server-builder business, but that IP remains for licensing to any of AMD’s partners that create complete server systems based on AMD’s Epyc/other IP.
It’s going to be AMD working with the server OEMs to price all of a server’s processor hardware, and AMD has a triple threat there: CPU, GPU, and motherboard (chipset on the Epyc SoC). Those 128 PCIe lanes that Epyc 1P and 2P processors offer come via the Zeppelin dies, so that’s a savings for motherboard makers right there.
The Infinity Fabric supported on the Epyc and Vega processors is similar in scope to what Nvidia offers with its NVLink technology, and neither AMD nor Intel will be using NVLink for any CPU-to-GPU coherent interfacing, and Intel does not use the Infinity Fabric. So that will make AMD Epyc CPUs the only choice for x86-based systems with workloads where the CPU needs to communicate coherently with any Vega WX/Instinct-based GPU SKUs.
Nvidia’s NVLink is only being used for limited Power9-based systems, and Power9 is also certified for OpenCAPI (derived from IBM’s CAPI coherent protocol) usage/interfacing. AMD is a founding member of the OpenCAPI Consortium along with IBM and others, so there will be AMD server CPU and professional GPU support for the OpenCAPI coherent interface, including any Power9-based interfacing with AMD’s GPU accelerator products.
Epyc sales alone look to be more profitable for AMD than any GPU-only markets, if you look at AMD’s past Opteron market-share figures. And Epyc is beating Intel in integer workloads, and AMD has its Vega GPUs for any HPC FP workloads where Epyc has an AVX disadvantage compared to Intel. AMD probably limited Epyc’s FP/AVX with power savings in mind for the majority of server workloads that do not need that AVX ability. AMD has its Vega GPU accelerator SKUs to offer for any heavy FP number-crunching workloads, and that’s what will make Epyc systems plausible for the HPC market.
stop moving the goal post. in my original post i was talking about HPC (supercomputers), not machines specific to machine learning/AI only. for this kind of machine, massive FP64 performance is a must due to the various kinds of workloads being run on it. so the Vega 10 GPU will never be used in this kind of machine due to its limited FP64 performance. and then you said this:
“The Peak Double Precision Compute Performance of 819 GFLOPS(?) for Vega FE according to Reddit but Vega FE is not a Radeon Pro Branded WX SKU, so who knows about any professional compute products that AMD may have at the moment”
that’s why i pointed directly to the MI25 page and its FP64 performance. i know you want to say that the MI25 is for machine learning/AI, so its FP64 might be held back on purpose by AMD. sadly, that’s not the case at all. tell me, what benefit would AMD have from limiting the FP64 performance on the MI25? if it had massive FP64 performance, then they could directly compete with GP100 not just in AI but in true HPC machines as well.
you know what? you are only living in your own world and just want to say what you want. even when people try to guide you in the right direction, you simply choose to be ignorant about it. get a life, buddy.
No, you do not much care about Nvidia’s or AMD’s GPUs, and the goal post is the same: Nvidia does not have any server-grade CPU IP with which to get the package-deal sales that AMD will be able to offer. Any potential Epyc customer for AMD is a potential GPU customer, and Nvidia cannot tell Intel how to price Intel’s server SKUs. AMD can and will offer any of its Epyc customers Epyc/Radeon Pro/Radeon Instinct MI25/other GPU package-pricing discounts, and even motherboard discounts. So any of AMD’s server OEM partners can offer their potential customers deals that Nvidia does not have the ability to match.
We already know that AMD/RTG professional WX/Instinct SKUs will be lower priced than Nvidia’s Quadro SKUs, and AMD having full pricing control of its Epyc server SKUs, which are much more affordable than any of Intel’s offerings, is going to get Radeon WX/Instinct into a lot of Epyc server systems over the next few years.
It’s going to be that total cost of ownership that gets AMD plenty of sales of Epyc paired with Radeon WX/Instinct GPUs: CPUs, GPUs, and the motherboards in the server/workstation/HPC systems, via AMD’s pricing-latitude trifecta that Nvidia cannot match. Those TFLOPS/dollar and TFLOPS/watt figures will be looked over, and professional-grade GPUs are underclocked/undervolted to reach the optimum TFLOPS/watt metrics, so overclocking ability is not a factor in the professional markets.
GamersNexus has done its first round of Radeon Vega FE (air-cooled) undervolting/underclocking testing, and hopefully they will also do some professional-workload testing for the Radeon Vega FE; the FE was already winning many professional-workload benchmarks relative to the competition.
And did you find any of the Nvidia Quadro DP FP figures? They are hard to find in any comprehensive tabular format online.
I assume Wikipedia spends (at least part of) their donations on the costs of running the servers. Unless I’m misunderstanding you, I can’t think of why it would be Wikipedia’s job to collect and publish GPU performance numbers. Anyone (you or anybody) could do that.
Pages about computer hardware do sometimes lag behind, I agree- sometimes still stating that a certain CPU/GPU/etc. is the performance leader in some area when that hasn’t been the case for years- or they’ll lack more up-to-date info on newer products. Perhaps Wikipedia pays some employees to keep sections up to date, but that’s purely speculation on my part.
They will say “wait for drivers!”.
You must be some kind of stupid, buddy. I have no bias for either side, but for you to be such a little fuck, wasting your time commenting on this just to be negative, is utterly ridiculous.
Your mother should’ve raised a more intelligent, respectful, and less wasteful individual and you should be ashamed of yourself.
If you knew you weren’t even going to give the product a chance then why click on this page?
Check yourself in the future.
Someone found some cookies in their cornflakes this morning, didn’t they? The previous comment is not only harmless, it’s quite funny regarding not only AMD but game developers too. No Man’s Sky, Mass Effect: Andromeda, as well as some more recent BIG TITLE games, are having issues with drivers and hardware compatibility. Also, is your name irony?
Um, I think the expression is “did someone piss in your cornflakes”. I tried to find a previous use of the expression “cookies in [one’s] cornflakes” on Google without success.
Unless you’re trying to eat healthier, I don’t see the harm in a cookie or two turning up there
Lol, one of these types.
You’re the one with the issues, dude. Look at what he said, and look at your response, seriously. Take out whatever giant object is firmly lodged in your ass.
Wow projection much?
Probably shouldn’t let that remark get so under your skin. It’s a common fanboy argument. Whether gaming-centric software improves Vega’s gaming performance or not remains to be seen. The comment that bothered you seems to imply it won’t, but there’s no evidence to support it, either.
I’m disappointed to see ANOTHER reputable PC hardware site focus on gaming performance and not what the card’s actually meant for. Everyone would be going “WTF, mate?” if the RX Vega or a GTX card were being reviewed only with workstation or enterprise compute software.
Furthermore, all this focus on game framerates on a card NOT intended for gaming is giving people the impression that this is (guaranteed) the gaming performance to expect from the RX SKUs and that the software is somehow “broken” and RX Vega’s software has to “fix it”. In actuality, the only reason for concern would be if RX Vega significantly outperformed Frontier Edition and the drivers were not updated to improve its “Game Mode” retroactively.
Oh, to clarify: I don’t particularly care if one company is ahead or the other is- just that there’s competition to drive innovation, higher efficiency and lower prices.
Currently, I’m expecting performance beating the 1080 but probably not the 1080ti when the actual RX Vega gaming cards come out, with performance gradually improving with driver updates in the long-term. It seems pretty obvious that power efficiency is not as good as nVidia’s GTX cards right now, though. I’m not expecting that to change with different software.
I’m planning an mATX Ryzen 5 1600-based build this year (Mini-ITX if an AM4 ITX board comes out that I particularly like) and will be using whatever is the best card for my money in performance/$. The card I pick also can’t have an excessively loud cooler. Aside from that, whether the card has an AMD or an nVidia GPU doesn’t particularly concern me.
I would *like* for that best-bang-for-your-buck card I choose to support FreeSync, though, due to the exorbitant cost of G-Sync monitors. Using HDMI or DVI and VSync will leave performance on the table.
and no, I’m not expecting nVidia GPUs to support FreeSync anytime soon, limiting my choices to AMD. But I’ll take DVI/HDMI and Vsync and spend a little more on, say, a 1060 instead of a 1050/ti if the performance just isn’t there on AMD’s side. That way I’ll have some extra FPS for Vsync to eat into without harming my gameplay experience.
There will hopefully be some time for more Vega FE game testing once the RTM Radeon RX Vega gaming drivers are released. But Vega FE is not for gaming; it is for game development and other software development (FE is not too shabby on non-gaming compute and non-gaming graphics workloads). And as usual with AMD (at least until the Epyc revenues start rolling in), it will take some time for the drivers and the game-engine/gaming-software ecosystem to get fully optimized for the new Vega GPU microarchitecture.
The Green Meanies can laugh all they want, but AMD has done a great job with its new Zen CPU microarchitecture, and Vega FE does great on those non-gaming workloads. If RX Vega lands between the GTX 1080 and GTX 1080 Ti at release, then AMD/RTG did their job and more than kept their promise of a flagship offering. And AMD created Zen and Vega with a fraction of the R&D funding that is/was available to Intel/Nvidia.
AMD’s Ryzen SKUs are selling well, and AMD’s Polaris RX 470/570 and RX 480/580 SKUs are consistently sold out, for whatever gaming/mining reasons, and those GPU/CPU sales are still producing the revenues for AMD/RTG to be back big time, For Real, this time around.
“Pro cards are bad at gaming” seems to be a popular myth, but not one that has been true for many GPU generations. At best, you could claim that pro GPUs are clocked lower than consumer ones, but this is not the case with Vega FE (as it clocks as hard as it can within its thermal and power envelopes). Nor is it hobbled by ECC, as the Vega FE does not support in-die ECC, nor does it have any application certification (unlike a FirePro or Quadro).
I would be willing to bet that, clock for clock, RX Vega will perform the same as Vega FE does. I would also bet that RX Vega is not going to magically achieve higher clocks than Vega FE for the same power.
Pro cards are bad at gaming, relatively speaking, compared to gaming cards, if that FPS metric is your only goal. And pro graphics SKUs are not tuned for FPS first and foremost like gaming cards are. Gaming cards can get by with some quick-and-dirty math libraries that are unacceptable for any pro workloads where accuracy is of prime importance; ditto for the graphics fidelity and error-free rendering needed for pro graphics workloads, or even compute workloads accelerated on the GPU.
You will have to pair that professional GPU with an actual workstation-grade/pro CPU and motherboard to get the full level of ECC/driver-certified support for any real workstation platform. So gaming GPUs are better for gaming and professional GPUs are better for professional, error-free usage. And some issues of public safety come into play if the workstation platform is used for any structural engineering workloads, etc., where things need to be as error-free as one has the liability insurance to cover. And any errors that may lead to lawsuits/criminal penalties need to be kept to a minimum.
This is not true. Linus did a video on this comparing a Quadro to an equivalent gaming card. They performed exactly the same in games. With a few exceptions, it is the exact same silicon. There may be differences in binning or TDP, but when compared apples to apples, they are exactly the same in games.
Quadro m6000 vs titan xp
https://www.youtube.com/watch?v=LC_sx6A5Wko
The Titan XP is not a gaming-only-focused SKU; it’s in the same usage/marketing class as the Radeon Pro FE, and so it’s not marketed for pure gaming-only workloads. The Titan XP has 3840:240:96 shader processors : texture mapping units : render output units, and the GTX 1080 Ti has 3584:224:88. The Quadro M6000 is based on the Maxwell GPU microarchitecture, so that’s not the same silicon as the Titan XP, and the Quadro M6000 comes with 12GB and 24GB video memory options and has 3072 CUDA cores.
The Quadro P6000 is the SKU with the 3840 CUDA cores, but thanks to Wikipedia’s crappy, non-standard way of listing the Quadro specs, there are no TMU/ROP figures for the Quadro SKUs. And looking at Linus’s video, it is in fact a Maxwell SKU that is supposedly giving a Titan XP a run for its money! So what is up with that? It’s hard to know without those Quadro TMU/ROP numbers for both the M6000 and the P6000.
If you are going to list a video card comparison in the future, at least list the two cards’ shader processor, texture mapping unit, and render output unit numbers and the proper SKU IDs, because the ROP count relative to the TMU/shader processor counts can be used to gauge a GPU’s likely FPS advantages. And what’s the version of the graphics API (DX##) used for the benchmarks, or even the DX## feature-level support, pro versus semi-pro Nvidia SKU?
AMD’s Vega FE is most certainly not going to be able to fling the frames out there without the ROPs with which to do it, but there is plenty of extra compute in Vega for those professional workloads that may just produce some of the revenues (Epyc revenues will help AMD more) to allow AMD to afford to engineer a gaming-only-focused GPU design or designs.
Nvidia is always going to be able to get higher FPS than AMD; just look at the ROP ratios on the Nvidia SKUs. Look at the number of ROPs on the Vega FE (64): that matches the GTX 1080’s 64 ROPs, while the 1080 Ti has 88 ROPs. The Radeon Pro FE is heavy on compute/shaders at 4096 shader cores compared to Nvidia’s lesser compute resources. AMD needs a gaming-focused SKU that gives up some compute/shaders and adds a better ROP count, but that will have to wait until AMD has the revenues to engineer a gaming-only-focused GPU series.
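The ROP argument above can be put into rough numbers: peak pixel fillrate scales as ROPs × clock (a back-of-the-envelope sketch using the boost clocks from the spec table; real throughput also depends on bandwidth and many other factors):

```python
# Peak pixel fillrate in Gpixels/s: ROP count x boost clock in MHz.
def fillrate_gpix(rops: int, boost_mhz: int) -> float:
    return rops * boost_mhz / 1000

vega_fe = fillrate_gpix(64, 1600)      # 102.4 Gpix/s
gtx_1080 = fillrate_gpix(64, 1733)     # ~110.9 Gpix/s
gtx_1080_ti = fillrate_gpix(88, 1582)  # ~139.2 Gpix/s
print(vega_fe, gtx_1080, gtx_1080_ti)
```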
The miners love AMD’s Polaris GPUs for that extra compute reason alone; miners do not care about gaming, but AMD has to focus Vega’s extra compute on the professional market, where the revenues are to be made. Look at where Nvidia gets its R&D revenues: it’s in that professional market. The Vega FE is doing well just getting the FPS it currently gets with the number of ROPs that it has. The Vega FE has a much higher compute/shader count than Nvidia at the consumer/prosumer level. Unless AMD can somehow turn those primitive shaders into programmed ROPs, AMD’s RX Vega is bound to land somewhere between the GTX 1080 and GTX 1080 Ti on most DX11 benchmarks. With DX12 gaming still in the first stages of adoption by game makers, there is going to be an extended period of time required to build up any Vega-based SKU’s software/driver ecosystem support.
If you can afford that $5000+ Quadro, maybe the M6000 is a good deal used for your gaming needs, and the P6000 is probably even more costly. I do think that comparing a pro SKU to a non-pro or semi-pro SKU is maybe not so apples-to-apples, looking at the M6000’s 12GB and 24GB VRAM options.
There are so many new IP features in AMD’s Vega GPU microarchitecture that it’s going to take some extra months of testing while AMD gets its drivers up to par on Vega FE, RX Vega, and any Radeon Pro WX (formerly FirePro-branded) real pro SKUs. If that Quadro M6000 is beating a Titan XP, then maybe there is a difference in Maxwell (GM200/GM204) compute ability in addition to the extra VRAM on the Quadro SKU, but there are too many unknowns to prove your point, and Linus needs to list the full testing specs in a readable format.
Linus has long been known to be a dumbass in regards to computing; he's good at making videos (actually his staff is) and that's about it, really.
Did you actually watch the video? That whole long comment is useless if you did not watch the actual video first. The comparison is not with the Titan Xp but with the Maxwell Titan X. The Quadro M6000 and Titan X have similar specs except for the amount of VRAM and the Quadro's ability to support ECC, and in games (Crysis 3) both cards produce the same FPS.
“look at where Nvidia gets the Revenues for R&D it’s in that professional market.”
Nope. The majority of Nvidia's revenue still comes from its gaming business. Its professional revenue has indeed been increasing in the last few quarters, but the major contributor (over 50%) is still the gaming segment alone.
http://www.anandtech.com/show/11361/nvidia-announces-earnings-for-q1-fy-2018
No, and it's up to you to back up your statement with readable text/HTML article references; videos are TL;DW. And it looks like Nvidia's gaming revenue, as a percentage of its total business, is on a relative downward trend (though the PC market is in decline) compared to its non-gaming revenues, which are increasing, and JHH sure spends a lot of on-stage time talking about markets other than gaming.
I never said that Nvidia's gaming revenues were not relevant. It's Nvidia's professional-market earnings growth and research that gives Nvidia the extra billions in revenue to justify investing in specialized professional GPUs and gaming-only tuned GPU SKUs that have less compute and use less power.
Look at the trends for Nvidia's professional-market sales, and at the wild swings in gaming revenue relative to Nvidia's professional-market revenue. JHH is not dumb; he is looking toward Nvidia's future, and that's a future not dependent on the fickle gaming-only market.
For Nvidia that's a +186% year-over-year data center revenue improvement and a +50.5% improvement for automotive over the same period. Gaming revenue swings and the PC market's decline are not good news for either AMD's or Nvidia's respective gaming earnings growth over the long term.
AMD has so little revenue in the professional markets currently. But Epyc CPU sales will probably surpass any of AMD's GPU revenues in short order and give AMD the funds to invest in RTG, which will allow RTG to design some gaming-focused GPU SKUs with those high ROP/TMU numbers and less compute to save on power. AMD will also have to continue the Radeon Pro WX/Radeon Instinct investments to capture more of the professional GPU market's additional revenue stream on top of any gaming revenue. Gaming alone will not keep anyone in business.
"Pro cards are bad at gaming, relatively speaking, compared to gaming cards if that FPS metric is your only goal. And pro graphics SKUs are not tuned for FPS usage first and foremost like gaming cards are."
There is no magical hardware tuning, at the same clocks on the same die, the performance will be the same. Even in cases where drivers are used to enforce more conservative code paths, merely switching drivers is sufficient to eliminate this disparity.
"You will have to pair that professional GPU card with an actual workstation-grade/pro CPU and motherboard to get the full level of ECC/driver-certified support for any real workstation platform. So gaming GPUs are better for gaming, and professional GPUs are better for professional, error-free usage."
This is not the case with the Vega FE, as it lacks both driver certification AND ECC. No in-cache ECC, no in-memory ECC.
I was talking about pro GPUs having to be paired with pro CPUs/motherboards for a fully certified, real professional-grade system, not the Vega FE, so your statement is off target.
There will not, I repeat, NOT be any structural engineering firms using the Vega FE or the Titan X/whatever. So what are you on about? I think you need to re-read the post you replied to, as you have not understood it.
"There is no magical hardware tuning, at the same clocks on the same die, the performance will be the same. Even in cases where drivers are used to enforce more conservative code paths, merely switching drivers is sufficient to eliminate this disparity." (Not sure which GPU SKU you are talking about; I was talking about the real pro GPUs.)
You really have very little idea how ECC works in hardware, and how the chipset drivers and memory protocols manage ECC in the firmware of GPUs and CPUs that have ECC abilities turned on. All of that is done transparently to any user-space gaming code. Error correction in the hardware takes extra time and adds latency on any type of processor.
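The point that error correction costs extra work per access can be made concrete with a toy example. Below is a minimal software Hamming(7,4) single-error-correcting code in Python; it is purely illustrative, since real GPU/CPU ECC uses wider SECDED codes implemented in dedicated hardware, not software like this.

```python
# Toy Hamming(7,4) code: 4 data bits protected by 3 parity bits.
# Illustrates the extra parity work ECC hardware does transparently.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7-bit codeword -> (corrected 4 data bits, error position or 0)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # 1-based index of the flipped bit, 0 if clean
    c = list(c)
    if pos:
        c[pos - 1] ^= 1             # correct the single-bit error
    return [c[2], c[4], c[5], c[6]], pos

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                        # simulate a single-bit memory fault
data, where = hamming74_decode(code)
print(data == word, where)          # prints: True 5  (data recovered, bit 5 fixed)
```

Every read goes through the syndrome computation whether or not a bit flipped, which is exactly why ECC adds a little latency to each access.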
RX Vega is going to be (at launch) between the GTX 1080 and the GTX 1080 Ti, and AMD will have its flagship SKU. As usual, RX Vega will have to go through the nominal AMD cycle of driver revisions after release to get to higher levels of gaming performance (that "fine wine" thing AMD does because of its relatively underfunded driver teams), and the Epyc revenues will help with that problem the next time around.
I'm no expert in the area, but I think a common reason for a pro graphics card to underperform in games is that its drivers are optimized for image accuracy in things like heavy rendering jobs, where you want an exact and clean image, not a less accurate one taking shortcuts to churn out 60+ rendered frames back-to-back in a second (like a game situation).
I’d expect any performance improvements in RX Vega’s drivers to be retrofitted into the “Game Mode” of FE down the road.
I’m guessing drivers for cards like Quadro/FirePro are more mature, since I agree they perform fine in games.
I wonder if FE is doing any geometry culling, which is crucial to a decent, sometimes even playable, framerate in games. I mainly work with older game engines, but in my development experience, no culling can tank framerates.
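For anyone curious what culling does mechanically, here is a minimal sketch of bounding-sphere frustum culling. The `Plane` and `Sphere` types are invented for illustration and do not come from any real engine or driver; the idea is simply that objects entirely outside the view volume are never sent to the GPU at all.

```python
# Minimal bounding-sphere vs. frustum-plane culling sketch.
from dataclasses import dataclass

@dataclass
class Plane:
    # Plane as n·p + d = 0, with unit normal n pointing into the frustum.
    nx: float
    ny: float
    nz: float
    d: float

@dataclass
class Sphere:
    cx: float
    cy: float
    cz: float
    r: float

def outside(plane, sphere):
    """True if the sphere lies entirely behind the plane (safe to cull)."""
    dist = (plane.nx * sphere.cx + plane.ny * sphere.cy
            + plane.nz * sphere.cz + plane.d)
    return dist < -sphere.r

def visible(frustum_planes, sphere):
    """Draw only objects not fully outside any frustum plane."""
    return not any(outside(p, sphere) for p in frustum_planes)

# Near plane at z = 1 facing +z: normal (0, 0, 1), d = -1.
near = Plane(0.0, 0.0, 1.0, -1.0)
behind_camera = Sphere(0.0, 0.0, -5.0, 1.0)   # well behind the near plane
in_front      = Sphere(0.0, 0.0, 10.0, 1.0)

print(visible([near], behind_camera))  # False: culled, never drawn
print(visible([near], in_front))       # True: submitted to the GPU
```

A real engine tests against all six frustum planes (and usually a spatial hierarchy first), but even this single-plane test shows why skipping the work entirely beats rasterizing and discarding it.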
It's the exact same specs. Why would it perform drastically differently?
I think Vega will be the same as the R9 Fury series was. Nobody will buy this; it's just a waste of money and time for AMD.
LOL… Nvidia fanboys don't even know what they are saying. The Vega FE card is not actually a gaming card; you all know that and are still whining like children. Feeling the heat from the Vega gaming card that is yet to be released?
I think we will all feel the heat. The Vega family are hot, hot, hot power-hungry disappointments.
Ooohh, I see, it is not a "gaming card", so they left out the flux capacitor that would somehow turn the card into a "gaming card". Silly me! I'll bet my house, wife, and kids that Vega will disappoint all but the most hardcore fanboys.
Wow, you bet hardcore. Well, I use both AMD and Nvidia, currently an Nvidia card, so I don't think you can consider me an AMD fanboy. Let your family know I might be moving in. BTW, do any of your family members play Path of Exile?
It doesn't have the crippling 4GB-only HBM the Furies had, so if it's priced right (with this perf/watt) and the custom air coolers can handle it well enough, it will find enough buyers who don't care that much about the power consumption.
Still, AMD is in a tough spot if this is the final performance of the chip and there are no drivers coming to rectify that. Then the hardware will just be too expensive to sell at a competitive price.
Sales of the Polaris/Vega professional SKUs (Radeon Pro WX and Radeon Instinct MI25) and the corresponding Zen/Epyc sales will obviate the need for AMD to rely on consumer/gaming revenues alone for both GPUs and CPUs. So AMD may have some pricing latitude to price its consumer Vega SKUs very close to break even to gain or maintain flagship GPU market share. AMD's Polaris-based RX 470/480 and RX 570/580 SKUs are sold out, for whatever gaming/mining reasons, so AMD's GPU revenues are not hurting at the moment for lack of sales.
The Epyc sales are just beginning, and the indications are that Epyc's revenues will get AMD back into the server/workstation/HPC market in a better way than even AMD's Opteron SKUs did, above even Opteron's best market metrics back in those days.
Has anyone ever actually been crippled by Fury’s 4GB of HBM? Because I sure haven’t.
Fury X owner here. Yes, unfortunately; when I moved to 1440p I found you can't run a lot of newer games at max settings with only 4GB.
Hi! My name is “Nobody!” I had a 280X 3GB and loved it. It was the best choice for my money at the time I built my i5 Devil’s Canyon machine.
Sweeping generalizations ahoy in here
Radeon Vega Murloc Edition *gurgle*
My Corsair AIO CPU cooler sounded a little like that, but you could barely hear it at all under the (also quiet) GPU and 140mm case fans. I’ll take hearing the liquid move around in there over the shrill whiny noise of small-diameter blower fans.
“While I know that testing the Frontier Edition in ONLY gaming is a bit of a faux pas, much of our interest was in using this product to predict what AMD is going bring to gamers later this month. It is apparent now that if the clocks are in the 1600 MHz range consistently, rather than dipping into 1400 MHz and below states as we found with the air cooler at stock, Vega in its current state can be competitive with the GeForce GTX 1080. That’s an upgrade over where it stood before – much closer to GTX 1070 performance.”
Raja said that this was not a gaming card, and it's not. So now on to some more testing with compute and non-gaming graphics rendering/video workloads. Now that you have these cards, it would be nice to do that other non-gaming workload testing.
That includes some PCI passthrough testing with Ryzen/Threadripper (better yet, Epyc) and KVM, with some virtualized GPU testing across several running VM/OS instances all sharing one physical GPU subdivided into logical GPUs (Nvidia and Intel mostly restrict this to their pro SKUs only). Also some dual and triple Vega FE testing (you have 3 Vega FEs to play with), maybe on Threadripper (not the best solution), or better yet that Epyc 7401P 24-core ($1075) single-socket workstation SKU and its 128 lanes of PCIe goodness; I'm seriously looking at that 7401P. Maybe some Radeon Pro WX testing as well, if AMD can provide any samples.
Also, for any folks who may run Vega FE compute or non-gaming 3D rendering (animation) workloads: can you try some undervolting and/or underclocking of the Vega FE while testing power usage? If it can be done without adversely affecting performance, then the Vega FE will be useful for some hours-long 3D animation rendering workloads.
$1075 is not so bad for 24 Epyc 1P server/workstation cores, and you already have the semi-pro Vega FE for some video and other workload testing. How hard will it be to get any Epyc motherboard samples from the motherboard makers, or even some Epyc CPU samples, for testing?
P.S. If RX Vega can get to GTX 1080 levels of performance, then at least AMD will have its flagship SKU. But until AMD gets the revenue stream (most likely from Epyc sales), AMD/RTG will not have the funds to produce a gaming-only GPU SKU with loads of ROPs and less compute, designed to spit out FPS the way Nvidia can with its massive revenue stream.
AMD, and its RTG, only had the funds to create that modular Zeppelin server die (used across its Ryzen/Threadripper/Epyc platforms) and to produce a GPU microarchitecture that would be great for professional/compute/AI workloads while at least getting AMD a "flagship" gaming SKU to go along with its current Polaris-based mainstream gaming SKUs.
Navi will be RTG's first design managed by Raja from the start, as the previous GCN designs were begun before RTG was created. Navi's modular GPU dies are going to give AMD the ability to introduce, within a few months' time, a full range of Navi-based GPUs (low end, mainstream, and flagship), in a similar manner to what AMD has offered with the Zen/Zeppelin-die-based Ryzen/Threadripper/Epyc SKUs that cover a complete product range from desktop to server/HPC. No more mainstream GPU designs one year and flagship GPU designs the next: with Navi's modular design, AMD will have a scalable, smaller GPU die with great wafer/die yields to create GPUs for all of its gaming markets, and likewise professional Navi modular-die designs for the compute/AI markets. Going modular and scalable on that Infinity Fabric worked out great for AMD's Zen-based CPU offerings.
Neat. But here's the big question: how well does it do for visualization in SolidWorks?
That’s exactly what I was thinking.
“Where are all the professional software suites? Where’s Adobe, where’s AutoDesk, AutoCAD, SolidWorks? Where’s Blender, 3DMax, Maya? Where’s all the ray tracing and image detection suites?”
This review pitches the equivalent of a Quadro card at a host of games. Not that there's anything wrong with that, but the actual usage of this variant of Vega is oriented toward professional work, so where are the professional software benches?
Vega FE is not comparable to Quadros or FirePros, as it lacks application certification.
Vega FE will work with professional software just fine for most usage, so Blender and the other software will mostly work. If you are doing production work on a tight deadline, then you may want the Radeon Pro WX (formerly FirePro-branded) cards with the full hardware/driver certification and such. But you are also going to need the fully certified Epyc workstation SKUs for more error-free production, where errors can cost you a job, and further business, if that job is not done by the deadline because of production errors. Ditto for any Nvidia/Intel professional workloads: if it's not done correctly and on time, that's someone's a$$.
So for learning and development work, the Radeon Vega FE is about the same as the Radeon Pro Duo in its workload usage metrics, with the Vega FE performing better on some workloads than the Radeon Pro Duo.
^This guy knows his stuff.
Yeah, so much so that maybe Clementoz is a paid employee of AMD.
AMD must not have any magical drivers for RX Vega either, because it is failing to impress in Budapest and Portland so far against what was most likely a stock 1080. A blind test with no framerates shown, in BF1. Is this some idea of a joke?
If RX Vega were better, they would be killing the media with "leaks" by now showing its shining performance.
Yeah, but let's be real: 90% of people looking forward to Vega could not care less and have probably never heard of most of those programs. Most people are only here in the hope that it'll shed some light on the gaming performance of the RX.
Though the absence is somewhat perplexing, given that the air-cooled version had been tested in them by many reviewers.
^This.
EXACTLY.
"the diaphragm pump design that makes this card interesting. Think of this as a flexible reservoir with a high-tension spring to create pressure back into the system. A diaphragm pump works with one-way check valves and reciprocating diaphragm material to create alternating areas of pressure and vacuum.
…. maintain reliable usage of the cooler through the course of natural evaporation of fluid. This is very similar to the kinds of pumps used in fish tanks and artificial hearts, likely a more expensive solution than was found on the Radeon Pro Duo or Fury X"
….
“The noise made by this pump is very different than traditional AIO coolers we have used in the office, more of a “gurgle” than any kind of “whine”.”
Sounds like (a) they are doing something new and better with LIQUID COOLING, and (b) it's quiet, which is something many would pay a premium for.
I wouldn't consider it groundbreaking. I still found it louder than other coolers, but the high-pitched whine was replaced with a ~60Hz diaphragm-type pump sound. It's just a different type of pump. Time will tell if it is more reliable, etc.
Doubles as a nice space heater for the winter; maybe by Christmas AMD can hope to make some sales. I can just picture it now, AMD's marketing going: "Game like a pro while keepin' it cozy for Yuletide!"
So all of those performance graphs show the OC'ed water-cooled version? Why not test it at the stock 1600 MHz?
The overclocked results are on the overclocked results page.
Have you been to the overclocking page? There are no results there, just frequency scaling.
I see that almost all charts, except some for GTA V, show WC 350W 1712 MHz (!!!). Is that a mistake? Is the card on the stock 350W BIOS with the stock 1600 MHz, or is it a real OC at 1712 MHz for most of the tests?
Are the tables at the end of every test for 300W stock clocks, 350W stock clocks, or the 1712 MHz OC?
There was a typo in the charts. Configuration for the non-OC tests was with the switch in the 350W position.
OK. So I see you haven't updated the charts yet, or even put a big red disclaimer at the beginning of the review… Good job.
I think that's hurting your readers, because with no OC benchmarks posted, people assume what they're seeing in the charts are real OC tests, and thus conclude that OC-ing over the stock 300W, from 1382/1440 to 1712 MHz, is useless.
Oh man, the Threadripper deals just keep getting sweeter from AMD. AMD's got its Zeppelin die/wafer yields up into the stratosphere, out into space, and far to the stars.
Now, if Threadripper is rated for a 180-watt (TDP) cooling solution, then Threadripper should overclock nicely with the included water-cooling solution.
"AMD to Include AIO Liquid Coolers with Ryzen Threadripper Processors"
Go to the TechPowerUp website for the article; the s-p-a-m filter has been mad about references to other websites lately!
Does this mean you have 2 air-cooled and 1 water-cooled variant on hand? If so, you should plug them into a system with an i7-7900X and see if you can dim the lights.
One-thousand internet points for you, sir.
So this is a pro card, and as you admit, it is a faux pas to bench it for gaming. Seeing that it is designed for pro use, and Nvidia pro solutions range from $3K to $5K with less performance, how can you possibly take issue with the $1500 price tag???
This is a pro card. It is the best-performing pro card in price/value and performance on the market today. Period. It is a runaway win for its purpose. It is disappointing to me that game-geek reviewers can't evaluate this card for what it is instead of what they want it to be.
It's a prosumer card. It's not a pro card. Think of it as a less potent Titan X with a liquid-cooled design, as opposed to those expensive professional cards you refer to. It's not the same thing: the drivers aren't certified for pro apps (which is what you're actually paying for), at least some of the pro cards tend to have single-slot designs, etc.
AMD has stated that the driver code is the same when used for gaming.
This is the same GPU that will be present in the gaming part.
Not hard to connect the dots. Sure, there will be some further optimizations, but don't expect a miracle, regardless of how many times you've been promised one in the past.
They also said that this should not be compared to the RX version, and that the RX version will be a faster gamer. If it is the same hardware, it seems the difference is drivers. Obviously the gaming drivers are not ready, hence this is not a good example of real gaming marks. Maybe when the game-optimized drivers are out, this will get updated. Drivers matter.
Well, it seems to be the same faux pas all the PC hardware press is making. It’s basically just throwing raw meat to the feuding packs of fanboy gamers the card’s not even intended for. Disappointing.
Is that a reservoir at the back of the card, with one tube going to it?
In part, but read the article.
A beefy air cooler should be able to compete with the liquid cooler at 1600 MHz (thinking a 2.5 slot design like we see on many 1080 Tis.)
If the RX Vega product isn't much more than this, or is identical to this silicon with similar driver performance, the board partners will need to come in around $479 for their custom designs for this to make any significant impact on GTX 1080 sales at current market prices.
Ideally it would be a $400 GPU, but that’s probably not going to happen.
But is it really a membrane pump, though?
GamersNexus’s teardown showed only one fluid tube connecting the water jackets with the tank on the right.
This would imply, then, that the water pump is in fact on the main water block, like they thought, and that the tank on the right is simply a membrane tank to provide constant pressure, like the expansion tank atop your water heater.
This would make sense, as it would allow for constant pressure even as some water permeates through the tubing and evaporates.
It has a hum similar to a fish tank aerator pump. The pump is over the block and the additional tank is as you describe. You need an expansion volume for this type of pump to operate efficiently in a closed loop. I don't suspect leakage is that much of an issue on modern closed loop coolers, so the primary purpose would be maintaining pressure during the suction stroke of the pump.
Makes sense.
Thanks for the reply!
They (GN) didn't do a "teardown"; they commented on pictures someone sent to them from the Internet.
That’s a fair point.
1382 MHz to 1712 MHz and we see a ~10% bump in results?
And sometimes not even 4%…
If you bump your GPU clock 25% but only see a 4% improvement, then either:
- the GPU is bandwidth-limited, or
- the drivers are limiting the card.
On the same note, could we reduce the clock rate by 25% and only lose ~10% performance? Maybe not even 4%?
That could drop power usage to well under 200W. It seems possible AMD could make a decent 1070 competitor then?
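Back-of-the-envelope, the numbers in this comment work out as follows; a small Python sketch (my own arithmetic, not anything from the review) that computes how much of a clock bump actually shows up as FPS:

```python
# Scaling efficiency: what fraction of a clock-speed increase shows up
# as an FPS increase. Values near 1.0 mean the GPU is clock-bound;
# values well below 1.0 point at bandwidth or driver bottlenecks.

def scaling_efficiency(clock_before_mhz, clock_after_mhz, fps_gain_pct):
    """Fraction of the clock bump that showed up as FPS."""
    clock_gain_pct = (clock_after_mhz / clock_before_mhz - 1.0) * 100.0
    return fps_gain_pct / clock_gain_pct

# 1382 MHz -> 1712 MHz is roughly a 23.9% clock bump.
eff_best  = scaling_efficiency(1382, 1712, 10.0)  # the ~10% FPS gain case
eff_worst = scaling_efficiency(1382, 1712, 4.0)   # the ~4% FPS gain case

print(f"clock bump: {(1712 / 1382 - 1) * 100:.1f}%")
print(f"efficiency at +10% FPS: {eff_best:.2f}")   # well under 1.0
print(f"efficiency at +4% FPS:  {eff_worst:.2f}")  # strongly bottlenecked elsewhere
```

With both ratios landing far below 1.0, the commenter's conclusion (that something other than clock speed is the limiter) is at least arithmetically consistent.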
nVidia’s GPUs don’t seem to scale linearly with clock speed anymore, either.
Ryan,
I just want to say that I think the graphs are a little hard to read. It might be a good time to update the way you display this information. Thank you for providing great coverage of new technology.
Too bad Nvidia hasn't taken the AIO route like AMD for their GPUs, especially when heat plays a big role with boost.
After reading this review of the WC Vega FE, I am fairly sure AMD is hitting a wall with their GPUs. The fact that AMD is slapping an AIO water cooler on their "prosumer" card tells me their ability to provide efficient cards is out the window. Granted, the air-cooled version is a bit of a fart-in-the-wind situation.
After reading many reviews of the Vega FE (both air and WC), AMD's drivers are once again the main culprit behind their fairly poor GPU performance in games. IMO, the review needed professional/computational benchmarks. Other than that, seeing AMD's new high-end GPU falling below a 1080 is a bit concerning. I have lost a lot of hope in AMD's RX Vega lineup. But I'll remain optimistic and ultimately hope AMD steps up to the plate with their upcoming gaming-oriented GPUs. Good review as always. Thanks for posting the reviews, Mr. Shrout, and take care (same goes for the rest of the PCper crew).