History and Specifications
AMD has launched what it calls the “world’s fastest graphics card”. Is it?
The Radeon Pro Duo has had an interesting history. The card was originally shown as an unbranded, dual-GPU PCB at E3 2015 last June, where AMD touted it as the ultimate graphics card for both gamers and professionals. At the time, the company thought an October launch was feasible, but that clearly didn’t work out. When pressed for information in the Oct/Nov timeframe, AMD said it had delayed the product into Q2 2016 to better coincide with the launch of the VR systems from Oculus and HTC/Valve.
During a GDC press event in March, AMD finally unveiled the Radeon Pro Duo brand, but it was also walking back the idea of the dual-Fiji beast being aimed, even partially, at the gaming crowd. Instead, the company talked up the benefits for game developers and content creators, such as using its 8192 stream processors for offline rendering, or to help game devs implement and improve multi-GPU support in upcoming games.
Anyone who pays attention to the graphics card market can see why AMD would make this positional shift with the Radeon Pro Duo. The Fiji architecture is on the way out, with Polaris due in June by AMD’s own proclamation. At $1500, the Radeon Pro Duo will be a stark contrast to the prices of the Polaris GPUs this summer, and it sits well above the price of any NVIDIA part in the GeForce line. And though CrossFire has made drastic improvements over the last several years thanks to new testing techniques, the multi-GPU ecosystem is going through a major shift with both DX12 and VR bearing down on it.
So yes, the Radeon Pro Duo has both RADEON and PRO right there in the name. What’s a respectable PC Perspective graphics reviewer supposed to do when a card like that finds its way into the office? Test it, of course! I’ll take a look at a handful of recent games as well as a new feature that AMD has integrated with 3ds Max, called FireRender, to showcase some of the professional chops of the new card.
Radeon Pro Duo Details
The information provided here is an overview of the specifications and design of the card itself. If you read over our preview story already, there isn’t much new here other than a couple of photos we took in-house. If you are ready to jump right to the test results, feel free to do so!
The card follows the same industrial design as the reference Radeon R9 Fury X, and integrates a dual-pump cooler with an external fan/radiator to keep both GPUs running cool.
The 8GB of HBM (high bandwidth memory) on the card is split between the two Fiji XT GPUs, just like the memory on other multi-GPU options on the market. The 350 watt power rating is exceptionally high, exceeded only by AMD’s previous dual-GPU beast, the Radeon R9 295X2, which drew 500+ watts, and the NVIDIA GeForce GTX Titan Z, which draws 375 watts!
Here is the specification breakdown of the Radeon Pro Duo. The card has 8192 total stream processors and 128 Compute Units, split evenly between the two GPUs. You are getting two full Fiji XT GPUs in this card, an impressive feat made possible in part by the use of High Bandwidth Memory and its smaller physical footprint.
| | Radeon Pro Duo | R9 Nano | R9 Fury | R9 Fury X | GTX 980 Ti | TITAN X | GTX 980 | R9 290X |
|---|---|---|---|---|---|---|---|---|
| GPU | Fiji XT x 2 | Fiji XT | Fiji Pro | Fiji XT | GM200 | GM200 | GM204 | Hawaii XT |
| GPU Cores | 8192 | 4096 | 3584 | 4096 | 2816 | 3072 | 2048 | 2816 |
| Rated Clock | up to 1000 MHz | up to 1000 MHz | 1000 MHz | 1050 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1000 MHz |
| Texture Units | 512 | 256 | 224 | 256 | 176 | 192 | 128 | 176 |
| ROP Units | 128 | 64 | 64 | 64 | 96 | 96 | 64 | 64 |
| Memory | 8GB (4GB x 2) | 4GB | 4GB | 4GB | 6GB | 12GB | 4GB | 4GB |
| Memory Clock | 500 MHz | 500 MHz | 500 MHz | 500 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) x 2 | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 384-bit | 384-bit | 256-bit | 512-bit |
| Memory Bandwidth | 1024 GB/s | 512 GB/s | 512 GB/s | 512 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 320 GB/s |
| TDP | 350 watts | 175 watts | 275 watts | 275 watts | 250 watts | 250 watts | 165 watts | 290 watts |
| Peak Compute | 16.38 TFLOPS | 8.19 TFLOPS | 7.20 TFLOPS | 8.60 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 8.9B x 2 | 8.9B | 8.9B | 8.9B | 8.0B | 8.0B | 5.2B | 6.2B |
| Process Tech | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $1499 | $499 | $549 | $649 | $649 | $999 | $499 | $329 |
The Radeon Pro Duo has a rated clock speed of up to 1000 MHz. That’s the same clock speed as the R9 Fury and the rated “up to” frequency on the R9 Nano. It’s worth noting that we did see a handful of instances where the R9 Nano’s power limiting capability resulted in some extremely variable clock speeds in practice. AMD recently added a feature to its Crimson driver to disable power metering on the Nano, at the expense of more power draw, and I would assume the same option would work for the Pro Duo.
The rest of the specs are self-explanatory – they are simply double those of a single Fiji GPU. The card requires three 8-pin power connectors, so you’ll want a beefy PSU to power it. In theory, the card COULD pull as much as 525 watts: 150 watts from each of the three 8-pin connectors plus 75 watts over the PCI Express bus.
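As a quick sanity check on those numbers, here is a minimal sketch of the arithmetic behind the table. The 2-FLOPs-per-clock figure is the standard fused multiply-add assumption for GCN shaders; everything else comes straight from the specs above, and the clocks are rated maximums rather than sustained speeds.

```python
# Sanity-checking the headline specs (sketch; clocks are rated "up to" values,
# and 2 FLOPs/clock is the usual FMA assumption for GCN shaders).
shaders, boost_ghz = 8192, 1.0
peak_tflops = shaders * 2 * boost_ghz / 1000        # -> 16.38 TFLOPS

bus_bits, mem_mhz = 4096, 500                       # per Fiji GPU (HBM1)
gbs_per_gpu = bus_bits / 8 * mem_mhz * 2 / 1000     # DDR: 2 transfers per clock
total_bandwidth = gbs_per_gpu * 2                   # -> 1024 GB/s across both GPUs

max_power = 3 * 150 + 75                            # three 8-pins + PCIe slot -> 525 W
print(f"{peak_tflops:.2f} TFLOPS, {total_bandwidth:.0f} GB/s, {max_power} W")
```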
AMD is definitely directing the Radeon Pro Duo towards professionals and creators, for several reasons. In terms of raw compute power, there isn’t a GPU on the market that can match what the Pro Duo can do. For developers looking for access to more GPU horsepower, the $1500 price will be more than bearable, and it gives them a pathway to really start diving into multi-GPU scaling integration for VR and DX12. AMD even calls out its FireRender technology, meant to help software developers integrate a rendering path into third-party applications.
But billing your card as the “world’s fastest graphics card” also means putting it squarely in the sights of PC gamers. At the Capsaicin event, AMD said the card was built for "creators that game and gamers that create." AMD claims the Radeon Pro Duo offers 1.5x the performance of the GeForce GTX Titan X from NVIDIA and 1.3x the performance of its own Radeon R9 295X2.
Obviously the problem with the Radeon Pro Duo for gaming is that it depends on multi-GPU scaling to reach its potential. The Titan X is a single-GPU card, so NVIDIA has much less trouble getting peak performance out of it. AMD depends on CrossFire scaling to get peak performance (and the rated 16 TFLOPS) in any single game. For both NVIDIA and AMD, that can be a difficult process, and it is a headache we have always discussed when looking at multi-GPU setups, whether on a single card or across multiple cards.
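To put rough numbers on that caveat, here is a back-of-the-envelope sketch; the scaling efficiencies below are illustrative assumptions, not measured results.

```python
# How CrossFire scaling efficiency dilutes the rated 16 TFLOPS in practice.
# The efficiency values below are illustrative, not measurements.
single_fiji_tflops = 8.19

for efficiency in (1.00, 0.70, 0.00):   # ideal, decent profile, no profile
    effective = single_fiji_tflops * (1 + efficiency)
    print(f"{efficiency:>4.0%} scaling -> {effective:5.2f} effective TFLOPS")
# With no CrossFire profile, the Pro Duo behaves like a single Fiji XT,
# which is why per-game scaling matters far more than the peak spec.
```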
The build of the Radeon Pro Duo is impressive. Much like the Fury X that was released last year, the RPD design is both sleek and classy, representing the gaming market better than any previous AMD reference products we have tested.
The Radeon Pro Duo is heavy, though – so be careful if you are shipping a system with one installed. Mounting the water cooling radiator is a bit easier thanks to tubing that is longer than the Fury X’s, which is a nice change. The red Radeon branding along the top of the card remains part of the design as well, and it helps the card stand out if you are building in a windowed case.
Even better – there is NO noticeable pump noise or coil whine from the card! Unlike our day-one Fury X sample, which makes me cringe each and every time I start it up for testing, the Radeon Pro Duo appears to have been fitted with higher quality pumps and electrical components.
The GTA V frametime graphs for AMD look like a seismograph during the largest earthquake in history.
Any word on fan and pump noise and coil whine?
You didn’t read the article, did you? It’s on the first page…
“Even better – there is NO noticeable pump noise or coil whine from the card! “
I have two 2016 manufactured Asus Fury X cards and the pumps are completely silent. I believe AMD has that problem completely straightened out.
Hi, any reason you are only testing aging DX11 games? The situation in DX12 games will be absolutely different, with the AMD solution the clear winner. As everybody knows, the SLI GTX solution will fail there!
Not only that, but where’s Far Cry Primal or The Division, newer games in which AMD would outperform, and has outperformed, Nvidia? Not to mention Hitman or Ashes… These games smell of handpicked titles.
None of those games have crossfire profiles.
Wrong.
*sigh*
You guys just keep finding your keyboards each morning, don't you?
Don’t give them any excuses, Ryan – test them.
DGLee tested Fury vs 980 Ti vs Titanium X.
Wide range of games.
Up to quad cards.
Results contradict this review, to say the least.
http://iyd.kr/753
GPU journalism comes down to “choosing the right games”, I guess.
That site didn’t test the same cards as PCPER, and specifically didn’t test the card this review is about. They didn’t even test a Nano which is the most similar card to this one. They also didn’t test frametimes at all which is probably the most important metric, especially when looking at SLI/CF stuff.
No one will argue that you can’t get some good FPS numbers in benchmarks with this card, but the gaming experience/smoothness is lacking and it’s too expensive. I know it’s crazy but some people use video cards to PLAY GAMES.
I actually have two Fury Xs in CF for gaming, and in GTA V, for example, all you need to do is turn Vsync on and it’s butter-smooth. It’s the same story with other games.
Also, if you want very good results with AMD GPUs, just force 8x tessellation through the driver and voila, you get much better results with no visible difference.
It’s crazy. It’s as if the same person with a personality disorder keeps talking to you.
You just never know who he/she is.
that’s messed up
Pretend anonymous is one person arguing with themself. It makes it a beautiful thing.
No DX12? That’s a shame. What are you afraid of?
they are afraid of losing the nvidia money for showing amd in a bad light
A DX12 game might be interesting but the thing is 99% of games are still DX11 so I’m not sure how representative it would be of gaming in general or even DX12 in general.
yeah something is up with these benches
amd is clearly the superior solution but they picked all nvidia biased gameworks games to show amd in a bad light
something fishy is going on
nobody even cares about dx11 anymore
all the good games are dx12 now
the only reason they are testing nothing but dx11 GAMEWORKS games is because they make amd look bad
they can’t say anything good about amd or they will lose the nvidia payoff money they get for making amd look bad
Can you name some of these games since “all the good games are DX12 now”? If you look at Steamcharts there is ONE game with DX12 support in the top100 most played games (Rise of the Tomb Raider at #94).
Let me guess Steamcharts is a giant Nvidia conspiracy too and they helped fake the moon landing.
Are you high? There is a grand total of 10 games that support DX12, 2 of which are early access, and one is a “remaster” of a 10-year-old game. Did you forget what happened with DX11 when it came out? We are likely to see major titles come out on DX11 for years into the future, just like with DX9 titles. Also, idk about you, but my games library has an awful lot of DX11 titles in it. Not to mention there is currently no way to frame rate DX12 games, which is a pretty big fucking deal when you’re testing CrossFire/SLI. You’re also straight up deluded if you think Nvidia or AMD or Intel is paying hardware reviewers off.
Does AMD support using one card per eye for VR scenarios? Isn’t this what the card is mainly being marketed for in the pro market?
They will. Especially in DX12.
This literally makes no sense. VR does not mean DX12, and DX12 multi-GPU is still difficult to do from a developer point of view.
Again….*sigh*
To add to this: VR multi-GPU is an even HARDER prospect than DX12 explicit multi-GPU. The optimisation pathway is different (latency-focused rather than throughput-focused), and because the relative mix of parallelisable vs. non-parallelisable jobs changes when you separate ‘left eye’ and ‘right eye’ rendering, all the optimising you did to minimise latency for a single GPU pretty much has to be thrown out for multi-GPU.
Thus far, NOBODY has released any VR program that supports multi-GPU outside of the proof-of-concept tech demos put out by Nvidia and AMD themselves. Valve are the only ones who have even promised that a game (The Lab) would implement it, but thus far have not.
Is that really true? Because I think the way people look at it is like running separate monitors on separate GPUs rendering separate things. That by itself should not be that hard. In fact, it should be easier than typical multi-GPU, because in concept it’s not actually the same thing as multi-GPU (rendering the same thing on the same screen with different GPUs). The issue might be synchronizing the two, which would probably require identical GPUs or slow the whole thing down to the slower GPU.
Doesn’t SEEM more difficult.
You’re VASTLY underestimating the difficulty of rendering for VR, and trying to do that across multiple GPUs just makes the problem harder.
You have an 11ms window to complete all your rendering and start reading out the framebuffer. If you miss it, you’re SOL. Within that 11ms, you need to perform the game world simulation, hand that data over to the GPU, perform all geometry operations, perform all buffer operations (e.g. shadowing), copy all that output across the PCIe bus to the other GPU, render the actual on-screen pixels (the only really parallelisable part), merge the two buffers into one by passing data back over the PCIe bus again, warp the final composited buffer (along with performing a just-in-time orientation correction depending on SDK path), then output it.
It’s already somewhat of a hack job to do just-in-time warping as it is, due to GPUs not having a natively exposed function for “how long is it until scanout for the next VSYNC?” resulting in various tricks like late-latching to try and get data ready just in time without overshooting (missed frame = BAD) or undershooting (wasted GPU time, latency penalty). Trying to coordinate this on just one GPU is hard enough, trying to do it with two is even harder.
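To put rough numbers on the budget described above, here is a minimal sketch; the per-stage times are illustrative placeholders, not measured values.

```python
# Sketch of the 90 Hz VR frame budget described above.
# All stage times are made-up placeholders for illustration.
REFRESH_HZ = 90
budget_ms = 1000.0 / REFRESH_HZ          # ~11.1 ms per frame

stages_ms = {
    "game world simulation": 2.0,        # CPU side
    "geometry + shadow passes": 3.0,     # largely non-parallelisable
    "PCIe copy to second GPU": 0.8,
    "per-eye pixel rendering": 3.5,      # the genuinely parallel part
    "PCIe copy back + merge": 0.8,
    "warp + just-in-time correction": 0.7,
}

used = sum(stages_ms.values())
print(f"budget {budget_ms:.1f} ms, used {used:.1f} ms, slack {budget_ms - used:.1f} ms")
# Overshoot the budget and the frame is missed outright: there is no partial credit.
```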
Valve implemented CrossFire support for their Aperture Robot Repair demo, which was created using the Source 2 engine. I haven’t tested The Lab using a multi-GPU setting yet, but I see no reason it would be otherwise. And they stated that they practically doubled the framerate using AMD’s CrossFire tech in their Advanced VR Rendering talk at last year’s GDC, and encouraged everybody to do the same.
This SKU, and others from both AMD and Nvidia, will be old long before the benchmarking tools and the games themselves have been optimized for DX12, Vulkan, and the VR gaming hardware/software/driver stack. There will be no definitive body of evidence for a few more years, and maybe somebody will have the spare time to test this new, and soon to be older, generation of GPU technology in a few years to settle the argument!
You shouldn’t show contempt for your audience. *sigh*
This!
AMD is no longer your friend, Ryan. In fact I suspect they hate your guts today… but we loves ya 😉
Did you use the professional drivers for the professional testing?
No, AMD said it wasn't necessary.
The FireRender plug-in just sees them as OpenCL-addressable devices. No qualification needed, and it wouldn't help there.
It will be interesting to see how RPD compares to a high end AMD and Nvidia pro card. They are much more expensive than $1500, which might mean the RPD is a great price for pro users. Many gamers don’t realize there is a whole other market for GPUs with prices that would make your eyes water.
Needs some DX12 stuff man, though I guess those are currently difficult to benchmark or whatever?
Yeah, the problem is we can't use our Frame Rating methods, which are pivotal to finding things like the issues in GTA V or The Witcher 3. Soon! 🙂
Issues in GameWorks titles? NO!!!
Next thing you’re going to tell me is that water is wet.
How DARE you suggest Gameworks might possibly create ANY issues in games just because of the overwhelming amount of games that run like garbage with it? Huh? REALLY?
Yeah… didn’t think so…
Oh, so you just assumed there would be issues, and didn’t bother to test? A good article would have mentioned this… “I didn’t want to test modern titles because…”. Instead this comes off as a very biased review to anyone with the slightest clue, Ryan.
I am sorry, but the titles you picked (and proudly presented) are all Nvidia-biased titles from the start. It makes the review really, really smelly.
At least you could have shown some other titles:
Quantum Break (AMD-biased)
AotS (DX11, DX12, and good benchmarking)
Hitman (DX11, DX12, AMD-biased)
The Division
Oh yeah, that Rise of the Tomb Raider is so Nvidia-biased…
It was ONLY the FLAGSHIP GAME of AMD’s version of HairWorks…
LOL…
Quantum Break is a clusterfuck right now and doesn’t work properly with dual GPUs, same with The Division. They totally could have thrown in Hitman or The Division, but it wasn’t as if this was even supposed to be presented as a gaming card…
IT LOST IN ALL PROFESSIONAL BENCHMARKS…
In professional work, the CPU is also used when doing rendering. I understand that you want to test the GPU here, but just as testing real-world scenarios in games is important to you, this is important to professionals. So a set of results with and without the CPU would have been great. It could also help pinpoint bandwidth issues arising from using one card vs. two.
Great article otherwise, and thanks for the hard work.
The CPU will factor in less and less for rendering work as GPUs get more ACE-type functionality; the professional drivers, graphics APIs, and software packages are doing even more acceleration on the GPU for ray-traced rendering that used to require a CPU (lots of CPUs with lots of cores, at costs of thousands/millions of dollars).
Pro rendering is measured in frames per minute, or even longer times if AO/other settings and heavy ray tracing sampling are done in the graphics software on the GPU (pro GPUs). The CPU is a piss-poor tool for rendering, or there would never have been a need for GPUs in the first place. Future GPU ACE/ACE-type units may even get all of the CPU-type branch/VM memory management functionality and do completely without any need for the CPU for any workloads; GPUs may even get embedded CPU functionality!
Watch out for AMD’s APUs on an interposer with future greater-than-HSA-1.0 functionality. Those APUs on an interposer will be so integrated with the GPU, functionality wise, that the CPU will be able to directly dispatch floating point/integer/other instructions to the GPU. The interposer-based APUs/SOCs will probably have the CPU and GPU sharing the same L1/L2/L3 I$ D$ cache memory subsystems, with the CPU and GPU wired up via the interposer such that instructions fetched into the CPU’s instruction cache will automatically be forwarded to the GPU’s cache and executed there if the CPU’s FPUs are busy (if the CPU even has FP units, or SIMD units, of its own on these systems). I’m looking for more dedicated ray tracing functionality like the PowerVR (Wizard) has for mobile GPUs, only on AMD and Nvidia SKUs, in the future when that catches on for the desktop gaming/pro graphics GPU market.
Holy cow, those frametimes!
Just as interesting is that SLI 980 Tis deliver much more consistent frametimes than a single Fury X.
AMD has a VERY long way to go with their drivers.
In these Nvidia-centric games, sure.
That’s the problem with the review.
AMD drivers have actually come a long way and are in many ways now even better than Nvidia’s. Certainly better than the latest fiasco with the Nvidia drivers, where people updated and the PC refused to boot up, even after they downgraded back to the version they had before.
Is this the totality of your unbiased testing?
All these games were released with GameWorks enhancements. To be unbiased you could include an equal number of games favoring AMD, or better yet find games that aren’t “sponsored” by either.
Yeah, adding some games that are optimized for AMD or optimized for neither seems more balanced than just a slew of GameWorks titles. That said, I also agree that the Pro Duo isn’t really for gamers and will probably not sell that much.
Agreed. Need to add more games for the Red team, or go without the sponsored ones.
Is the Pro Duo’s limited 4GB per GPU to blame for its shortcomings compared to the 980 Ti’s 6GB?
I normally do not post here, but leaving out DX12 games and putting only Nvidia GameWorks titles into your review seems very, very biased.
Do not get me wrong, I think the Pro Duo is a waste of money and not worth getting, but if you want people to stop saying you are heavily biased all the damn time… well, this review shows how biased you truly are.
Hopefully in the future you can change the view that neutral people who read reviews have.
Maybe it’s time to change your frametime methods, since DX12 will be the new standard.
Love how, whenever AMD products lose in benchmarks, which happens more often than not, the fanboys all come out whining about the benchmark selection lol
I think AMD saw the limitations of this card for gaming and decided to offer it with a professional driver option, repurposing it for more professional/development uses. As for gaming, RTG has probably washed its hands of this SKU and moved on to Polaris and its positioning for the more affordable mainstream market. The real money is being made on the lower priced SKUs, and in the laptop/mobile SKU market where AMD needs a better presence.
There will be time for more flagship fapping when Vega and Volta get here with HBM2, an improved driver/gaming engine software stack, more games/VR games support, and games support/optimizations for DX12/Vulkan!
Watch out for those M$ 3Es and attempts at corralling the gaming market into M$’s 30% cut of all the gaming handle action with its Windows 10 and UWP shenanigans! Keep plugging away, Gabe, we may very well need your efforts after all!
Exactly, this card is for devs and so on, not gamers. But the fanboys here have missed that point.
Hell yes for non-pro (non-full-pro hardware features) rendering with the pro drivers. I want one for Blender/other rendering, as soon as Polaris hits the market and the deals begin on these and other SKUs. Even at what this SKU costs, the availability of the professional drivers, even if the hardware may not have the full pro SKU’s error correction, makes these SKUs great for educational training and other uses. Hell, a college or school could get these for students for learning only, get just one or two full FirePro versions for undergrads’ and grad students’ final projects, which have to be as error-free as possible, and save loads of money.
It’s the pro certified drivers that make the FirePro/Quadro SKUs so costly, that and the years of hardware/software/driver support you get with the full pro versions of the $2000-$5000 cards, with full hardware error correction and such.
The only people that would need the full costly (FirePro, Quadro) versions would be the postgrad students and their professors doing actual research, where the pro versions would be required for actual products, or grant-funded research with strict requirements for safety and peer review.
Great write-up. Thanks Ryan!
Yay, you’ve finally updated your games list for benches. About time.
Not sure why AMD bothered, really; they may sell perhaps 10 at most.
Shame there are no DX12 titles. That seems like the direction games are headed, and it would be nice to know how such an expensive card will handle in the long term.
Gears of War, Hitman, and Quantum Break tend to do better for AMD with DX12.
Ryan isn’t going to test any DX12 games until Nvidia gives him the tools to do so.
Correction:
Ryan isn’t going to test any DX12 games until Nvidia wins in DX12 benchmarks.
“Nvidia gives him the tools”
And that’s the problem right there, folks.
No, the actual problem was that before we had FCAT screen capture / frame rating, CrossFire was a mess and AMD/ATI wouldn’t publicly admit anything was wrong. AMD fanboys were complaining that even though their benchmarks showed stellar performance, when they actually played a game with CrossFire it didn’t feel stellar. Then nVidia came along with FCAT to show what was happening, and lo and behold, AMD CrossFire drivers produced runt frames and dropped frames (that never get displayed) which artificially lowered frame times and inflated the overall reported framerate. With FCAT, these were taken out, and the actual observed framerate looked a lot different from what was being reported before. Once this evidence saw the light of day, AMD had no choice but to improve their CrossFire drivers, and they have for the most part.
So you AMD punters might want to get down and kiss nVidia’s ass for helping “out” the problem, because without them, AMD would still be flaunting rubbish benchmark results and ignoring your pleading to fix a problem they weren’t going to ever admit to.
Let’s be honest, if the shoe had been on the other foot and nVidia’s SLI had been performing poorly, but the benchmarks said otherwise, AMD would have stepped in with a way to “out” nVidia. This has been going on for as long as these two companies have been competing. Be glad that it does, otherwise we’d be getting spoon fed crap and we’d have to eat it with a smile.
I for one don’t give a rat’s ass where the tools for performance testing come from, as long as they are made freely available and can be tested by impartial third parties to verify fairness. After they’ve been proven trustworthy, there shouldn’t be any beef over who developed them, just be glad we have them. If there had been an nVidia-flavored spin on FCAT, I’m certain that either sites like PCPer or AMD themselves would have challenged it. Since that didn’t happen, and it’s become the standard for testing DX11 (and earlier) multi-GPU setups, there is no point in complaining where the tools came from now. Besides, who better than nVidia and AMD to test something that only they know so intimately? I’d argue that they understand the graphics APIs, and how best to test performance in them (especially since they build the drivers), better than any other entity. So who better to develop these tools?
I’m sure PCPer and Guru3D and every other reputable review site out there wants a way to fairly test DX12 titles. Not every game is going to have the included benchmark that AoS has built in. Beyond that, AoS IS NOT going to be indicative of all DX12 performance across the board. It’s a massive RTS which focuses on enormous amounts of draw calls and definitely benefits from Async Compute. Not all titles are going to lean as heavily (or at all) on these particular advantages of DX12.
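For readers unfamiliar with frame rating, here is a toy sketch of the idea described above; the scanline threshold and the capture data are made up purely for illustration, not taken from FCAT itself.

```python
# Toy model of FCAT-style frame rating: a frame only "counts" if enough of
# it actually reaches the display. Threshold and data are illustrative only.
RUNT_SCANLINES = 21  # frames showing fewer scanlines than this are runts

def observed_fps(frame_scanlines, capture_seconds):
    """Return observed FPS plus runt/dropped counts for a capture."""
    displayed = [h for h in frame_scanlines if h >= RUNT_SCANLINES]
    dropped = sum(1 for h in frame_scanlines if h == 0)
    runts = len(frame_scanlines) - len(displayed) - dropped
    return len(displayed) / capture_seconds, runts, dropped

# 8 frames delivered in 0.1 s: raw FPS claims 80, observed FPS is 50.
fps, runts, dropped = observed_fps([270, 5, 260, 0, 265, 12, 268, 250], 0.1)
print(fps, runts, dropped)  # 50.0 2 1
```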
I really hope that AMD gets more of the professional GPU/APU (on an interposer) workstation/HPC market for GPUs/GPU accelerators; then both AMD and Nvidia can become less dependent on the gaming market for revenues. In the HPC/workstation and supercomputer markets there will be more income to support R&D for the long-term improvement of the GPU for both compute and gaming. This will free both AMD and Nvidia from the fickle fanboy gaming market as a major source of GPU revenues, and allow both to move towards giving their GPU SKUs the very same functionality that the CPU provides, while at the same time providing the massively parallel vector processing and other graphics/compute functionality that no CPU can match.
I look forward to GPUs getting some dedicated ray tracing functional blocks, freeing graphics users from any dependency on Intel’s high-priced CPU server/workstation SKUs for graphics/workstation and, yes, gaming uses also! AMD’s ACE units are acquiring more of that CPU-type functionality with each new GCN generation, and Nvidia appears to be going in that direction too with some ACE-type functionality of its own. Nvidia did hire AMD’s Phil Rogers (AMD’s HSA guru), so maybe by the time Volta is here, async compute on the GPU will be a non-issue, except for the fanatics of Intel’s overpriced CPU SKUs. Let’s get the CPU out of all graphics workloads and relegate it to running the OS and janitorial duties, while the GPU does the graphics/gaming and serious number crunching!
Imagination Technologies already has dedicated ray tracing hardware available. Also, most people don’t avoid doing their final render of scenes on the CPU because of a lack of ray tracing performance/hardware; they do it on the CPU due to the lack of GPU memory, as some scenes in film CGI are 40GB+, which no GPU card can hold but which you can easily fit in system RAM. I will try to dig up a link, but it seems this is the case.
Looks like a niche card that’s more aimed at developers. I feel like gamers would be better served by getting any 2 high-end cards like anything in the Fury series (including the Nano) or the 980 or 980Ti.
Personally, I don’t like SLI / Crossfire that much, and would rather go with just 1 card. That said, Polaris and Pascal are coming out soon, so I feel like it’s better to just wait if you want 1 super-card.
Even if you don’t mind the old stuff, it’ll be on sale at clearance prices when the new stuff hits. Good time to grab a Nano or a 970 or whatnot, I think.
nvidia shill?
all gameworks games and no VR and no dx12 with async compute
Hey Ryan, when will Nvidia give you the green light to test DX12?
When they’ve ‘fixed’ async in their drivers. 😉