A fury unlike any other…
It’s finally time to talk specifics, people: can the new AMD Fury X rival the performance of the GeForce GTX 980 Ti?
Officially unveiled by AMD during E3 last week, the brand new Radeon R9 Fury X graphics card is finally here, and we are ready to show you our review. Very few times has a product launch meant more to a company, and to its industry, than the Fury X does this summer. AMD has been lagging behind in the highest tiers of the graphics card market for a full generation, depending on the two-year-old Hawaii GPU to hold its own against a continuous barrage of products from NVIDIA. The R9 290X, despite using more power, was able to keep up through the GTX 700-series days, but the release of NVIDIA's Maxwell architecture forced AMD to move the R9 200-series parts into the sub-$350 field, well below the selling prices of NVIDIA's top cards.
The AMD Fury X hopes to change that with a price tag of $650 and a host of new features and performance capabilities. It aims to once again put AMD's Radeon line in the same discussion with enthusiasts as the GeForce series.
The Fury X is built on the new AMD Fiji GPU, an evolutionary part based on AMD's GCN (Graphics Core Next) architecture. This design adds a lot of compute horsepower (4,096 stream processors) and is also the first consumer product to integrate HBM (High Bandwidth Memory), complete with a 4096-bit memory bus!
Of course the question is: what does this mean for you, the gamer? Is it time to start making a place in your PC for the Fury X? Let's find out.
Recapping the Fiji GPU and High Bandwidth Memory
Because of AMD's trickled-out approach to the release of the Fury X, we already know much about the HBM design and the Fiji GPU. HBM is a fundamental shift in how memory is produced and utilized by a GPU. From our original editorial on HBM:
The first step in understanding HBM is to understand why it’s needed in the first place. Current GPUs, including the AMD Radeon R9 290X and the NVIDIA GeForce GTX 980, utilize a memory technology known as GDDR5. This architecture has scaled well over the past several GPU generations but we are starting to enter the world of diminishing returns. Balancing memory performance and power consumption is always a tough battle; just ask ARM about it. On the desktop component side we have much larger power envelopes to work inside but the power curve that GDDR5 is on will soon hit a wall, if you plot it far enough into the future. The result will be either drastically higher power consuming graphics cards or stalling performance improvements of the graphics market – something we have not really seen in its history.
Historically, when technology comes to an inflection point like this, we have seen the integration of technologies onto the same piece of silicon. In 1989 we saw Intel move cache and floating point units onto the processor die; in 2003 AMD was the first to merge the memory controller, traditionally part of the north bridge, onto the CPU. Graphics, the south bridge, and even voltage regulation all followed suit.
The answer for HBM is an interposer. The interposer is a piece of silicon that both the memory and processor reside on, allowing the DRAM to be in very close proximity to the GPU/CPU/APU without being on the same physical die. This close proximity allows for several very important characteristics that give HBM the advantages it has over GDDR5. First, this proximity allows for extremely wide communication bus widths. Rather than 32-bits per DRAM we are looking at 1024-bits for a stacked array of DRAM (more on that in a minute). Being closer to the GPU also means the clocks that regulate data transfer between the memory and processor can be simplified and run slower, saving power and design complexity. As a result, the proximity of the memory means that the overall memory design and architecture can improve performance per watt to an impressive degree.
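To put rough numbers on that width advantage, here is a quick sketch of our own (not part of the original editorial) that assumes a 384-bit GDDR5 card such as the GTX 980 Ti for comparison:

```python
# Bus width comparison: many narrow GDDR5 chips vs. a few very wide HBM stacks.
# The 384-bit GDDR5 example assumes a card like the GTX 980 Ti.
gddr5_bits_per_chip = 32
gddr5_chips_for_384bit_bus = 384 // gddr5_bits_per_chip    # 12 discrete chips

hbm_bits_per_stack = 1024
hbm_stacks = 4                                             # Fiji's configuration
print(gddr5_chips_for_384bit_bus)                          # 12
print(hbm_bits_per_stack * hbm_stacks)                     # 4096-bit aggregate bus
```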
So now that we know what an interposer is and how it allows the HBM solution to exist today, what does the high bandwidth memory itself bring to the table? HBM is DRAM-based but was built with low power consumption and ultra wide bus widths in mind. The idea was to target a “wide and slow” architecture, one that scales up with high amounts of bandwidth and where latency wasn’t as big of a concern. (Interestingly, latency was improved in the design without intent.) The DRAM chips are stacked vertically, four high, with a logic die at the base. The DRAM die and logic die are connected to each other with through-silicon vias (TSVs), small holes drilled in the silicon that permit die-to-die communication at incredible speeds. Allyn taught us all about TSVs back in September of 2014 after a talk at IDF, and if you are curious about how this magic happens, that story is worth reading.
The first iteration of HBM on the flagship AMD Radeon GPU will include four stacks of HBM, a total of 4GB of GPU memory. That should give us in the area of 500 GB/s of total bandwidth for the new AMD Fiji GPU; compare that to the R9 290X today at 320 GB/s and you’ll see a raw increase of around 56%. Memory power efficiency improves at an even greater rate: AMD claims that HBM will result in more than 35 GB/s of bandwidth per watt of power consumed by the memory system, while GDDR5 manages just over 10 GB/s per watt.
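Working backward from those per-watt claims gives a rough idea of what the memory subsystems themselves draw. This is our own back-of-the-envelope math based on AMD's marketing figures (and the final 512 GB/s spec), not a measured result:

```python
# Implied memory-subsystem power, from AMD's claimed bandwidth-per-watt figures.
hbm_bandwidth_gbs   = 512    # Fury X, per the final spec table
gddr5_bandwidth_gbs = 320    # R9 290X
hbm_gbs_per_watt    = 35     # AMD claim: "more than 35 GB/s per watt"
gddr5_gbs_per_watt  = 10     # AMD claim: just over 10 GB/s per watt

print(round(hbm_bandwidth_gbs / hbm_gbs_per_watt, 1))      # ~14.6 W for the HBM array
print(round(gddr5_bandwidth_gbs / gddr5_gbs_per_watt, 1))  # ~32.0 W for Hawaii's GDDR5
```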
AMD has sold me on HBM for high-end GPUs; I think that comes across in this story. I am excited to see what AMD has built around it and how this improves their competitive stance with NVIDIA. Don’t expect to see dramatic decreases in total power consumption with Fiji simply due to the move away from GDDR5, though every bit helps when you are trying to offer improved graphics performance per watt. How a 4GB limit on the memory system of a flagship card in 2015-2016 will pan out is still a question to be answered, but the additional bandwidth it provides offers never-before-seen flexibility to the GPU and software developers.
And from Josh's recent Fiji GPU architectural overview:
AMD leveraged HBM to feed their latest monster GPU, but there is much more to it than memory bandwidth and more stream units.
HBM does require a new memory controller compared to what was utilized with GDDR5. There are eight new memory controllers on Fiji that interface directly with the HBM stacks. These are supposedly simpler than what we have seen with GDDR5 because they do not have to work at high frequencies. There are also the logic dies at the base of the stacked modules and the less exotic interface needed to address those units, again as compared to GDDR5. The changes have resulted in higher bandwidth, lower latency, and lower power consumption than previous designs. It also likely means a smaller amount of die space is needed for these units.
Fiji also improves upon what we first saw in Tonga. It can handle as many theoretical primitives per clock (4) as Tonga, but AMD has improved the geometry engine so that the end result will be faster than what we have seen previously. It will have a per-clock advantage over Tonga, but we have yet to see how much. It shares the eight-wide ACE (Asynchronous Compute Engine) setup that is very important for DX12 applications that can leverage it. The ACE units can dispatch a large number of instructions of multiple types, further exploiting the parallel nature of a GPU in that software environment.
The chip features four shader engines, each with its own geometry processor (each improved over Tonga's). Each shader engine contains 16 compute units, and each CU again holds four 16-wide vector units plus a single scalar unit. AMD categorizes this as a 4096 stream processor design. The chip has the xDMA engine for bridgeless CrossFire, the TrueAudio engine for DSP-accelerated 3D audio, and the latest VCE and UVD accelerators for video. Currently the video decode engine supports up to H.265, but does not handle VP9… yet.
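If you want to see how those unit counts multiply out to the headline figure, here is the arithmetic (our own illustration of the breakdown described above):

```python
# 4 shader engines x 16 CUs x (4 SIMDs x 16 lanes) = 4,096 stream processors
shader_engines = 4
cus_per_engine = 16
simds_per_cu   = 4
lanes_per_simd = 16

print(shader_engines * cus_per_engine * simds_per_cu * lanes_per_simd)  # 4096
```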
In terms of stream units, Fiji is around 1.5x that of Hawaii. The expectation off the bat would be that the Fiji GPU will consume 1.5x the power of Hawaii as well. Happily for consumers, this is not the case. Tonga improved power efficiency within the GCN architecture to a small degree, but it did not come close to matching what NVIDIA did with their Maxwell architecture. With Fiji it seems like AMD is very close to approaching Maxwell.
Fiji includes improved clock gating capabilities as compared to Tonga, allowing areas not in use to go to a near-zero energy state. AMD also did some cross-pollination from their APU group on power delivery. Voltage adaptive operation applies only the voltage that is needed to complete the work for a specific unit. My guess is that there are hundreds, if not thousands, of individual sensors throughout the die that provide data to a central controller that handles voltage operations across the chip. It also characterizes workloads so that it doesn’t overvolt a particular unit more than it needs to in order to complete the work.
The chip can dispatch 64 pixels per clock. This becomes important at 4K resolutions because all of those pixels need to be painted somehow. The chip includes 2 MB of L2 cache, double that of the previous Hawaii design. This ties back to the memory subsystem and the 4 GB of memory: a larger L2 cache is extremely important for consistently accessed data from the compute units, and it also helps tremendously in GPGPU applications.
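As a rough illustration of what 64 pixels per clock means at 4K, here is a quick sketch of our own. These are peak theoretical numbers only; real frames touch each pixel many times through overdraw, blending, and post-processing, so this is nowhere near an achievable frame rate:

```python
# Peak pixel dispatch vs. the pixel count of a 4K frame.
rops                = 64
core_clock_hz       = 1.05e9          # Fury X rated clock, 1050 MHz
pixels_per_4k_frame = 3840 * 2160     # ~8.3 million pixels

peak_pixels_per_second = rops * core_clock_hz               # 67.2 Gpixels/s
print(round(peak_pixels_per_second / pixels_per_4k_frame))  # ~8100 single-touch fills per second
```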
Fiji is certainly an iteration of the previous GCN architecture. It does not add a tremendous number of features to the line, but what it does add is quite important. HBM is the big story, along with the increased power efficiency of the chip. Combined, these allow a nearly 600 mm² chip with 4GB of HBM memory to exist at a 275 watt TDP, one that exceeds that of the NVIDIA Titan X by only around 25 watts.
Now that you are educated on the primary changes brought forth by the Fiji architecture itself, let's look at the Fury X implementation.
AMD Radeon R9 Fury X Specifications
AMD has already announced that the flagship Radeon R9 Fury X is going to have some siblings in the not-too-distant future. That includes the R9 Fury (non-X) that partners will sell with air cooling as well as a dual-GPU variant that will surely be called the AMD Fury X2. But for today, the Fury X stands alone and has a very specific target market.
| | R9 Fury X | GTX 980 Ti | TITAN X | GTX 980 | TITAN Black | R9 290X |
|---|---|---|---|---|---|---|
| GPU | Fiji | GM200 | GM200 | GM204 | GK110 | Hawaii XT |
| GPU Cores | 4096 | 2816 | 3072 | 2048 | 2880 | 2816 |
| Rated Clock | 1050 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 889 MHz | 1000 MHz |
| Texture Units | 256 | 176 | 192 | 128 | 240 | 176 |
| ROP Units | 64 | 96 | 96 | 64 | 48 | 64 |
| Memory | 4GB | 6GB | 12GB | 4GB | 6GB | 4GB |
| Memory Clock | 500 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) | 384-bit | 384-bit | 256-bit | 384-bit | 512-bit |
| Memory Bandwidth | 512 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 336 GB/s | 320 GB/s |
| TDP | 275 watts | 250 watts | 250 watts | 165 watts | 250 watts | 290 watts |
| Peak Compute | 8.60 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.1 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 8.9B | 8.0B | 8.0B | 5.2B | 7.1B | 6.2B |
| Process Tech | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $649 | $649 | $999 | $499 | $999 | $329 |
The most impressive specification here is the stream processor count, sitting at 4,096 for the Fury X, an increase of 45% compared to the Hawaii GPU used in the R9 290X. Clock speeds didn't have to decrease to get to this implementation either, which means that gaming performance has the chance to be substantially improved with Fiji. Peak compute capability jumps from 5.63 TFLOPS to an amazing 8.60 TFLOPS, easily outpacing even the NVIDIA GeForce GTX Titan X, rated at 6.14 TFLOPS.
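Those peak compute numbers fall straight out of the stream processor counts and rated clocks in the table above (two FLOPs per ALU per clock via fused multiply-add). A quick sketch reproducing the table's figures:

```python
# Peak FP32 compute = stream processors x clock x 2 FLOPs (one FMA) per clock.
def peak_tflops(stream_processors, clock_mhz):
    return stream_processors * clock_mhz * 1e6 * 2 / 1e12

print(f"{peak_tflops(4096, 1050):.2f}")  # 8.60 TFLOPS - R9 Fury X
print(f"{peak_tflops(2816, 1000):.2f}")  # 5.63 TFLOPS - R9 290X / GTX 980 Ti
print(f"{peak_tflops(3072, 1000):.2f}")  # 6.14 TFLOPS - Titan X
```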
Texture units also increased by the same 45%, but there is a question around the ROP count. With only 64 render back ends present on Fiji, the same count as the Hawaii XT GPU used on the R9 290X, the GPU's capability for final blending might be in question. It's possible that AMD felt the ROP performance of Hawaii was overkill for the pixel processing capability behind it, and that the proper balance was found by keeping 64 ROPs on Fiji. I think we'll find some answers in our benchmarking and testing going forward, but the back-of-the-envelope fill rates below show why the question comes up.
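Here are peak fill rates computed from the rated clocks in the table above; NVIDIA's boost clocks would push its numbers higher still, so treat this as a rough paper comparison only:

```python
# Peak pixel fill rate = ROPs x clock, using the table's rated clocks.
cards = {
    "R9 Fury X":  (64, 1050),
    "GTX 980 Ti": (96, 1000),
    "R9 290X":    (64, 1000),
}
for name, (rops, clock_mhz) in cards.items():
    print(f"{name}: {rops * clock_mhz / 1000:.1f} Gpixels/s")  # 67.2, 96.0, 64.0
```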
With 4GB on board, a limitation of the current generation of HBM, the AMD Fury X stands against the GTX 980 Ti with 6GB and the Titan X with 12GB. Heck, even the new Radeon R9 390X and 390 ship with 8GB of memory. That presents another potential problem for AMD's Fiji GPU: will the memory bandwidth and driver improvements be enough to counter the smaller frame buffer of the Fury X compared to its competitors? AMD is well aware of this concern but believes that a combination of the faster memory interface and "tuning every game" will ensure that the 4GB memory limit does not become a bottleneck. AMD noted that the GPU driver is responsible for memory allocation, and that technologies like memory compression and caching can drastically reduce memory footprints.
While I agree that the HBM implementation should help things, I don't think it's automatic; GDDR5 and HBM don't differ by that much in net bandwidth or latency. And while tuning for each game will definitely be important, that puts a lot of pressure on AMD's driver and developer relations teams to get things right on day one of every game's release.
At 512 GB/s, the AMD Fury X exceeds the available memory bandwidth of the GTX 980 Ti by 52%, even with a rated memory clock of just 500 MHz. That added memory performance should allow AMD to be more flexible with memory allocation, but drivers will definitely have to be Fiji-aware, changing how data is brought into the memory system.
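Where those bandwidth figures come from is simply bus width times effective per-pin data rate: HBM runs its interface at 500 MHz but transfers on both clock edges, so each pin moves 1 Gbps, while the 980 Ti's GDDR5 runs at an effective 7 Gbps per pin. A quick sketch of the math:

```python
# Memory bandwidth = bus width (bits) x data rate per pin (Gbps) / 8 bits per byte
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

fury_x    = bandwidth_gbs(4096, 1.0)   # 512 GB/s
gtx_980ti = bandwidth_gbs(384, 7.0)    # 336 GB/s
print(fury_x, gtx_980ti, f"{fury_x / gtx_980ti - 1:.0%}")   # 512.0 336.0 52%
```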
Fury X's TDP of just 275 watts, 15 watts lower than the Radeon R9 290X, says a lot about the improvement in efficiency that Fiji offers over Hawaii. However, the GTX 980 Ti still runs at a lower 250 watts; I'll be curious to see how this is reflected in our power testing later.
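Dividing the table's peak compute figures by TDP gives a crude paper efficiency comparison; actual gaming power draw, which our testing measures directly, is what really matters:

```python
# GFLOPS per watt of TDP, from the spec table - a rough proxy for efficiency.
cards = {
    "R9 Fury X":  (8.60, 275),
    "GTX 980 Ti": (5.63, 250),
    "TITAN X":    (6.14, 250),
    "R9 290X":    (5.63, 290),
}
for name, (tflops, tdp_watts) in cards.items():
    print(f"{name}: {tflops * 1000 / tdp_watts:.1f} GFLOPS/W")  # 31.3, 22.5, 24.6, 19.4
```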
Just as we have seen with NVIDIA's Maxwell design, the 28nm process is being stretched to its limits with Fiji. A chip with 8.9 billion transistors is no small feat, running past the GM200 by nearly a billion (and even that was astonishing when it launched).
Overhyped TURD fails to perform just like Bulldozer.
just no, bulldozer was a disaster compared to sandy bridge, much slower, Fury X is very close to TitanX/980 ti, it fails to be “the fastest” but not by much, and power usage and price are not an epic fail… not even close to bulldozer.
Agreed. For me it was Barcelona that was the fail. Fury is at least competitive, and a little more efficient than where AMD was going.
Close in performance with only 4GB of HBM memory, and the HBM2 stacks will probably be an almost drop-in replacement for the HBM stacks on the interposer, with some increase in clock speeds and BIOS/settings changes. And expect the drivers to be tuned and fixes to become available now that Fiji has been released, with some incremental improvements in performance enabled. AMD did OK for a new memory technology being introduced and a not-so-complete reworking of the GPU's microarchitecture, so what will HBM2 and a completely new GPU microarchitecture bring? Hopefully AMD will continue its rapid pace of improvement relative to its previous generations and be able to keep up with Nvidia’s performance metrics for gaming. AMD already has better FP performance for other workloads if users want the AMD GPU for other tasks.
By that metric all NVIDIA GPUs are also TURDS…
In some (but OK, not all) games the Fury X is faster than the $1000 Titan X and the equally priced 980 Ti, the two fastest cards NVIDIA ever made.
Bulldozer was as bad as your shitty analogy; this card is very competitive (albeit slower) with the 980 Ti. Slight price tweak and it’s worth considering.
I had an 8350 for the longest while but a few months ago switched over to Intel with the 5820K because I wanted more performance. I was looking forward to this card, I like AMD and where they want to move the industry, but they keep releasing lackluster products, probably due to a smaller R&D team; at the end of the day I have to base my purchase on the product itself, not sympathy. For now I’ll stick with my 2 290’s but I have a feeling I’m going Nvidia when they release a new card down the line.
AMD did do some good things with this card. Power draw is in a better place, though it's still to be seen how draw will increase with an air cooler, because the hotter temps will cause more leakage. Fury made some steps in the right direction but still has a ways to go.
ever thought about how if amd put so mutch resources into developing hbm and the positive ripple effect that has for other companies like nvidia and even intel when cpus start using it that they are purposely capping the performance on there products to let the more expensive less innovative brands have a artificial advantage since they use those could be using profits to contribute to amd’s research good example is how mantle seemed to be a type of beta dx12 and the reverse could be true to amd could help fund tech that helps them to like in areas of power efficiency
A 103 word sentence with no punctuation. We may have just hit a new low in readability.
You counted?!
One apostrophe ruined the perfect score.
A good and fair review Ryan. Keep up the good work!
This review just extended the life of the 3.5 970gtx by another year. Very sad as I was hoping for some competition.
Probably safe to wait until the 1000 series ti $250 card release 2016/2017?
Going to be interesting to see what is more relevant, VR, 144hz, or 4K.
I’m guessing VR & 144 for the foreseeable future, as 144Hz at 4K doesn’t seem possible for another 8 years because this HBM memory isn’t really moving the needle. Even if it improved 2X, 144Hz at 4K is not happening.
144hz is a freakin WASTE of resources in my opinion. I would much rather have 1/3 the framerate and 3 times the fidelity. No AAA game on the planet is going to run at 144hz with even 2 top-end cards, so why bother. 4K is better than 144hz, but building a 4K system right now is just stupid, as it costs 2-3 times more money than 1080p/1440p to get comparable framerates.
144hz can go to hell haha. VR will be in if they can make content for it. Kinda how 3D is dying now as well as Xbox Kinect.
You are smart to just hold onto the 970 for now but there is competition..
Whether it’s a waste or not is for individuals to decide. If you think it’s a waste, fine, don’t buy it, but obviously not everyone feels the same, including myself; I’d prefer smoothness to graphical fidelity any day.
I am just passionate about the balance between resolution, fidelity, and smoothness. It has to exist. 144hz isn’t a terrible thing to have as a limit but I want DEVs to make games that are going to demand that at MAX settings and 1440p or so, 60fps will be a rough target.
Think about ALL the effects, polygons, view distance, tessellation, AA, AF, AO etc etc etc that would be dramatically lower if we aimed solely for 144hz. If you want to turn your graphics down to low and run at 144hz go ahead, but don’t drag me down with you.
R9 Fury X = 8.60 TFLOPS, Titan X = 6.14 TFLOPS. That’s 2.46 TFLOPS more, yet performance is 30% slower or more on mainstream display sizes. Is this why the small form factor Fury X2 at E3 was powered with an Intel CPU? Is the CPU more of a bottleneck than before with HBM? That is something I can’t help but wonder; it would mean drivers and CPUs may have to accommodate a completely new, groundbreaking HBM’s demands.
important point
Fury X 64ROPs @ 1050MHz
Titan X 96ROPs @ 1100MHz+
All the other reviews I see put the Fury X at the same FPS as the 980 Ti in 4K, even with 4GB of memory, and it falls as little as 2-3 fps behind in lower resolutions.
But remember guys, these are early drivers and it’s a card for DX12 at 4K.
I really think that AMD wins here. It’s still competitive in DX11. For me the Fury X is more future proof. Next year is HBM2 and Pascal, so maybe at the end of 2016 AMD comes out with an answer, but for now NVIDIA only has the 980 Ti, and Win10 is around the corner with DX12.
For the price it should offer more. I’m guessing that nVidia to some extent is laughing its backside off (considering the level of expectations). I was really hoping for something interesting that would smack nVidia’s team green straight in the face, at least for a little while, just for the sheer change of pace.
It’s a good product. It has one good selling point: for the power, it’s half the length (more or less) of typical non-HBM monsters. Length-wise it looks like an ISA Trident 256/512KB.
I get the sneaky feeling that nVidia will wipe the floor with AMD when they bring 2nd gen HBM to the masses with Pascal or Volta or whatever name.
Ryan,
Is there any expectation that the overclocking tools for the Fury/FuryX will improve?
I honestly don't know. The only other options that we have been pointed towards used regedit. So…no.
Now that the initial hype has fallen flat, here comes another dose, the “it will get much better when drivers are optimized” brand of BS. It just never stops…
AMD, for all the talk of bad drivers they get, actually improves their drivers over time. Nvidia does not (much). That’s why the 780ti is now equivalent to a 7970 (in a gameworks title, no less):
http://awful.pictures/uploader/image.php?i=L2Uzmtcj8z
Since the Fury X is already faster in demanding games, having better drivers will make it a clear winner across the board.
But, yea, AMD didn’t do a clean sweep; still, it’s $300 cheaper than a Titan X and the same price as a 980 Ti.
Also this architecture is compatible with all FreeSync monitors, and is VR ready.
I will just leave this here
https://www.youtube.com/watch?v=xhVo7yPjQvE
So true XD
OMG LOL
Will there be any chance to run any compute benchmarks on the Fury X?
I can’t find a single site that ran any GPGPU benchmarks. I’d be curious to see if the Fury X has improved in any way in SP/DP versus the previous generation.
Maybe some Folding@home or other number-crunching blogs; they could also do some Blender render benchmarks, including Cycles rendering, now that Blender is getting support for AMD GPUs and Cycles rendering. There will still be more heavy gaming benchmarking done over the next few months, including any DX12 or, hopefully, Vulkan-enabled SteamOS based games, and a lot of Mantle, DX12, and Vulkan benchmarks when the newest graphics APIs come online. Man, those late-night pizza deliveries should be going strong at a lot of gaming, OS, and benchmarking website offices, and will be for some time, with all the new hardware technology and graphics API/middleware software and drivers that have to be written, debugged, and tweaked.
Goddamn it PCPer, your posting system is the dog’s mangy jewels once the length gets over one page, and the system breaks down with “Page not found” errors and other errant behavior. Please remove the double post.
The results are pretty much a wash. I think it’s going to come down to features and bias for purchasing decisions, as there is no clear victor (give or take a few percentage points).
I would like to see AMD’s Adaptive Sync tech improve as that is the only major feature that could swing the tide for an unbiased customer. Maybe some new tests when more monitors are released?
I’ll be in the market for a 980ti in a bit as I’m not parting with GSync anytime soon though.
Thanks for the great review Ryan. 😀
(on a side note, any 21:9 3440 by 1440 reviews/results coming out in the future? I’m interested in this for a future monitor purchase)
After looking at this review I’m almost sure there is some kind of driver overhead, because in the reviews at bit-tech, Tom's Hardware, and Guru3D, with newer platforms and faster RAM and CPUs, the Fury is not only much closer to the 980 Ti but is scoring above it in most of the 4K tests. Now, for example, at Overclock3D and here, with an older platform, the gap between the Fury X and 980 Ti is much wider, so wide that the Fury is barely getting close to the 980 Ti in 4K. So to me it looks like we’re actually going to have to wait for better drivers to see proper benching.
The problem is, if AMD really wanted to make a mark, they should have at least waited for proper new drivers to launch alongside this; most consumers won’t care that it could be better in the future, they just want what’s best now.
I highly doubt that your GTX 980 Ti and Titan X run at 1000 MHz. They should at least run at the normal boost clock without problems, probably even at a higher clock speed thanks to GPU Boost 2.0.
The Fury X is actually a pretty good result for AMD, but the limited amount of benchmarks on most tech sites and the different interpretation of the results lead to a lot of different opinions. Some reviews say the Fury X beats the Titan X, because they focus on 4K games and in the games benchmarked, AMD cards don’t suck.
Other review sites see parity between the GTX 980 Ti and Fury X, because they benchmark a number of games where both sides take victories. PCPer, which has always been called pro-NVIDIA in the comments on this webpage for as long as I’ve read them, seems to see the Fury X as a bad card.
If only Crysis Warhead, Crysis 3, Bioshock Infinite, Far Cry 4 and AC Unity were benched, then the Fury X would beat the Titan X in all cases. AMD even told PCPer that, due to HBM being a new thing, the drivers needed to be optimized specifically for each game.
See the following section: “AMD told me this week that the driver would have to be tuned ‘for each game’. This means that AMD needs to dedicate itself to this cause if it wants Fury X and the Fury family to have a nice, long, successful lifespan.”
“If you asked me today which card is faster, the AMD Fury X or the GTX 980 Ti from NVIDIA, I would definitely tell you the GeForce card.”
This section is honestly where I get upset with PCPer. Why such bias? The Fury X does perform equally well in many cases despite having only 4GB of HBM. It runs cooler and quieter (even the pump noise can be reduced to a bare minimum by pressuring the cooler with some soft foam between the liquid cooler and the GPU shroud case). It is smaller than any other high-end GPU out there. With updated drivers it probably will unleash lots of its currently unused raw power, just like the R9 290 in the past generation.
PS: I think the inclusion of Skyrim in the benchmarks is a joke.
At 4K I agree that it’s a personal preference (because some games are ~15% faster on the Fury vs. the 980 Ti, and vice versa).
Now if you plan to use an adaptive sync monitor, the Fury is the clear winner (based on the recent monitor support for FreeSync vs. G-Sync).
My take: if you already have a 4K monitor with G-Sync, go with a GTX 980 Ti.
If you plan to get a new 4K monitor, the Fury X seems like the better option (more choices, and more affordable).
The big question is: will DX12 games level the playing field?
Because at 1080p NVIDIA has a clear advantage with DX11…
Probably, but the question is when the large wave of DX12 games is coming; by then they won’t be selling the Fury series anymore.
Hey Ryan,
Great review. I wasn’t expecting the Fury X PRO to beat the 980 Ti, but I would say I’m glad AMD is competing, or trying to compete, at this level. The Fury X PRO seems to be a proof of concept with HBM integrated, and while it didn’t please a lot of people, this is great for the industry and kudos to them for bringing it out. One problem I would say that AMD won’t admit is the 4GB HBM limit. While it is amazing to see a 4GB card competing with a 6GB card, I think it’s sweating a little in 4K. All in all, I think everyone should just thank AMD for bringing out HBM.
Once again, great review Ryan.
I wonder if the Fury X will show an improvement under Windows 10 with DirectX 12. It seems to me, ignoring the accepted wisdom that 4GB isn’t enough to really push 4K, that with DX12’s new tech and schema for dealing with textures, the wider bus and higher transfer rates of HBM, and better threaded workloads, the Fury X may have a larger performance gain than traditional cards with GDDR5. Not to mention AMD’s supposedly improved compression and the iterative driver advances we should see.
Has anyone done any Mantle testing to see what change there is between it and D3D11? And do we have an idea how DirectX 12 will change performance with the new architecture/HBM and the new avenues of pushing the GPU via the new API?
And on HDMI 2.0, the ports are the same, are they not? Is this something that could later be changed with a firmware update? But if AMD were to offer a coupon for, say, 50% off an adapter for anyone who bought a Fury X, it would probably ease the butt hurt of those wanting HDMI 2.0 connectivity.
HERE IS BENCHMARK!!! DONT SAYING THAT IS 980Ti BETTER THAN FURY X,LOOK FANBOYS:
http://www.forbes.com/sites/jasonevangelho/2015/06/18/amd-radeon-fury-x-benchmarks-full-specs-new-fiji-graphics-card-beats-nvidias-980-ti/
YOUR Ti IS SHIT LIKE ALL OTHERS NVIDIA CARDS…
AND FOR END,THE MOST POWERFULL CARDS ON THE WORLD IS?
FURY X x2!!! SECOND IS R9 295X2!!! AND THESE CARDS IS NOT LOUD LIKE TITAN Z WITCH SOUNDS LIKE YOU HAVE TRACTOR IN THE CASE (ALL NVIDIA CARDS IS LOUD LIKE TRACTOR)AND FANBOYS WILL SAY NOW: NVIDIA HAVE LOWER TDP! YEEEAAAH!!! WOOOOW 😀 WHO CARES ABOUT THIS??? REALL GAMER HAVE EXTRA PSU!!! FANBOYS,BUY SOME GOD PSU..IS CHEAP…
AND LOOK THE PRICE OF NVIDIA CARDS!!! IS MORE EXPENSIVE OF ANY AMD CARD AND IS SLOWER AND WEAK!!! 1000$ FOR TITAN BLACK WHICH IS MUCH,MUCH WEAKER THAN FURY X!!! ONLY IDIOT WILL BE BUY THIS… AND DONT HAVE WATER COOLING! AMD CARDS HAVE WATTER COOLING,IS ULTRA STRONGER THAN ANY NVIDIA CARD AND IS MUCH,MUCH,MUCH CHEAPER…AND IS QUIET,IS NOT LOUD LIKE NVIDA..SOUNDS LIKE YOU HAVE TRACTOR IN THE CASE.. SIMPLE,NVIDIA IS BIG SHIT!!!! AND ONLY NOOBS BUYING THIS SHIT!!!!
This is the best. comment. ever.
So these kinds of comments are allowed on here now???
Why not remove them and ban the poster?
In a Crossfire arrangement, where do I put the two liquid coolers in a standard case? One on the rear and one on the side panel… if a cooling port is available on the side panel? Also, the cooler thickness is such that the rear-mounted one seems likely to collide with one or more of the quad-pair memory sticks on an X99 ATX motherboard.
Rear and bottom, rear and top, rear and front… all depends on the case.
And what about FURY performing 33% better than Ti under DIRECTX12…¬¬
http://i.imgur.com/kKLCcAr.png
Driver overhead…