A fury unlike any other…
It’s finally time to talk specifics here, people – can the new AMD Fury X rival the performance of the GeForce GTX 980 Ti?
Officially unveiled by AMD during E3 last week, the brand new Radeon R9 Fury X graphics card is finally ready for our full review. Very few product launches have meant more to a company, and to its industry, than the Fury X does this summer. AMD has been lagging behind in the highest tiers of the graphics card market for a full generation, depending on the now 2-year-old Hawaii GPU to hold its own against a continuous barrage of products from NVIDIA. The R9 290X, despite using more power, was able to keep up through the GTX 700-series days, but the release of NVIDIA's Maxwell architecture forced AMD to move the R9 200-series parts into the sub-$350 field, well below the selling prices of NVIDIA's top cards.
The AMD Fury X hopes to change that with a price tag of $650 and a host of new features and performance capabilities. It aims to once again put AMD's Radeon line in the same discussion with enthusiasts as the GeForce series.
The Fury X is built on the new AMD Fiji GPU, an evolutionary part based on AMD's GCN (Graphics Core Next) architecture. The design adds a lot of compute horsepower (4,096 stream processors) and is also the first consumer product to integrate HBM (High Bandwidth Memory), with a 4096-bit memory bus!
Of course the question is: what does this mean for you, the gamer? Is it time to start making a place in your PC for the Fury X? Let's find out.
Recapping the Fiji GPU and High Bandwidth Memory
Because of AMD's trickle-out approach to the Fury X release, we already know much about the HBM design and the Fiji GPU. HBM is a fundamental shift in how memory is produced and utilized by a GPU. From our original editorial on HBM:
The first step in understanding HBM is to understand why it’s needed in the first place. Current GPUs, including the AMD Radeon R9 290X and the NVIDIA GeForce GTX 980, utilize a memory technology known as GDDR5. This architecture has scaled well over the past several GPU generations, but we are starting to enter the world of diminishing returns. Balancing memory performance and power consumption is always a tough battle; just ask ARM about it. On the desktop component side we have much larger power envelopes to work inside, but the power curve that GDDR5 is on will soon hit a wall if you plot it far enough into the future. The result will be either graphics cards that consume drastically more power or stalled performance improvements in the graphics market – something we have not really seen in its history.
Historically, when technology comes to an inflection point like this, we have seen the integration of components onto the same piece of silicon. In 1989 we saw Intel move the cache and floating point unit onto the processor die; in 2003 AMD was the first to merge the north bridge's memory controller onto the CPU die; then graphics, the south bridge, even voltage regulation – they all followed suit.
The answer for HBM is an interposer. The interposer is a piece of silicon that both the memory and processor reside on, allowing the DRAM to be in very close proximity to the GPU/CPU/APU without being on the same physical die. This close proximity enables several very important characteristics that give HBM its advantages over GDDR5. First, it allows for extremely wide communication bus widths: rather than 32 bits per GDDR5 chip, we are looking at 1024 bits for a stacked array of DRAM (more on that in a minute). Being closer to the GPU also means the clocks that regulate data transfer between the memory and processor can be simpler, and slower, saving power and design complexity. As a result, the proximity of the memory means that the overall memory design and architecture can improve performance per watt to an impressive degree.
So now that we know what an interposer is and how it allows the HBM solution to exist today, what does the high bandwidth memory itself bring to the table? HBM is DRAM-based but was built with low power consumption and ultra-wide bus widths in mind. The idea was to target a “wide and slow” architecture, one that scales up to high amounts of bandwidth and where latency wasn’t as big of a concern. (Interestingly, latency improved in the design without that being the intent.) The DRAM chips are stacked vertically, four high, with a logic die at the base. The DRAM dies and logic die are connected to each other with through-silicon vias (TSVs), small holes etched in the silicon that permit die-to-die communication at incredible speeds. Allyn taught us all about TSVs back in September of 2014 after a talk at IDF, and if you are curious about how this magic happens, that story is worth reading.
The first iteration of HBM on the flagship AMD Radeon GPU will include four stacks of HBM, a total of 4GB of GPU memory. That should put us in the area of 500 GB/s of total bandwidth for the new AMD Fiji GPU; compare that to the R9 290X today at 320 GB/s and you’ll see a raw increase of around 56%. Memory power efficiency improves at an even greater rate: AMD claims that HBM will deliver more than 35 GB/s of bandwidth per watt consumed by the memory system, while GDDR5 manages just over 10 GB/s per watt.
AMD has sold me on HBM for high end GPUs; I think that comes across in this story. I am excited to see what AMD has built around it and how it improves their competitive stance with NVIDIA. Don’t expect to see dramatic decreases in total power consumption with Fiji simply due to the move away from GDDR5, though every bit helps when you are trying to offer improved graphics performance per watt. How a 4GB limit on the memory system of a flagship card in 2015-2016 will pan out is still a question to be answered, but the additional bandwidth offers never-before-seen flexibility to the GPU and software developers.
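To put rough numbers on that excerpt, here is a back-of-the-envelope sketch using only the publicly quoted specs; the arithmetic is mine, not AMD's official math. (The editorial's conservative ~500 GB/s estimate is where its ~56% figure came from; the final 512 GB/s spec works out to 60%.)

```python
# Peak theoretical memory bandwidth from bus width and per-pin data rate.
# HBM runs its 1024-bit stacks at 500 MHz double-data-rate, i.e. ~1 Gbps
# per pin; the R9 290X's GDDR5 runs 5 Gbps effective on a 512-bit bus.

def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Bus width (bits) x data rate (Gbps/pin) / 8 bits-per-byte -> GB/s."""
    return bus_width_bits * gbps_per_pin / 8

per_hbm_stack = bandwidth_gb_s(1024, 1.0)   # 128 GB/s per stack
fiji_total    = 4 * per_hbm_stack           # four stacks -> 512 GB/s
hawaii_total  = bandwidth_gb_s(512, 5.0)    # R9 290X     -> 320 GB/s

print(per_hbm_stack, fiji_total, hawaii_total)              # 128.0 512.0 320.0
print(f"Fiji uplift: {fiji_total / hawaii_total - 1:.0%}")  # Fiji uplift: 60%
```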
And from Josh's recent Fiji GPU architectural overview:
AMD leveraged HBM to feed their latest monster GPU, but there is much more to it than memory bandwidth and more stream units.
HBM requires a new memory controller compared to what was used with GDDR5. There are 8 new memory controllers on Fiji that interface directly with the HBM modules. These are supposedly simpler than their GDDR5 counterparts because they do not have to work at high frequencies. There are also the logic chips at the base of the stacked modules and the less exotic interface needed to address those units, again as compared to GDDR5. The changes have resulted in higher bandwidth, lower latency, and lower power consumption than previous units, and likely a smaller amount of die space needed for the controllers as well.
Fiji also improves upon what we first saw in Tonga. It can handle as many theoretical primitives per clock (4) as Tonga, but AMD has improved the geometry engine so that the end result will be faster than what we have seen previously. It will have a per-clock advantage over Tonga, but we have yet to see how much. It shares Tonga's 8-wide ACE (Asynchronous Compute Engine) setup, which is very important in DX12 applications that can leverage it. The ACE units can dispatch a large number of instructions of multiple types, further exploiting the parallel nature of a GPU in that software environment.
The chip features 4 shader engines, each with its own geometry processor (each improved over Tonga's). Each shader engine contains 16 compute units, and each CU again holds 4 x 16-wide vector units plus a single scalar unit; AMD categorizes this as a 4096 stream processor design. The chip has the XDMA engine for bridgeless CrossFire, the TrueAudio engine for DSP-accelerated 3D audio, and the latest VCE and UVD accelerators for video. Currently the video decode engine supports up to H.265, but does not handle VP9… yet.
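That hierarchy multiplies out to the headline shader count exactly; a quick sanity check of the figures above:

```python
# Fiji's stream processor count, derived from the hierarchy described above:
# 4 shader engines x 16 CUs each x 4 vector units per CU x 16 lanes each.
shader_engines = 4
cus_per_engine = 16
simds_per_cu   = 4
lanes_per_simd = 16

print(shader_engines * cus_per_engine * simds_per_cu * lanes_per_simd)  # 4096
```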
In terms of stream units it is around 1.5x that of Hawaii, so the expectation off the bat would be that the Fiji GPU will consume around 1.5x the power of Hawaii. This, happily for consumers, is not the case. Tonga improved power efficiency to a small degree within the GCN architecture, but it did not come close to matching what NVIDIA did with their Maxwell architecture. With Fiji, it seems like AMD is very close to matching Maxwell.
Fiji includes improved clock gating capabilities compared to Tonga, allowing areas not in use to drop to a near-zero energy state. AMD also did some cross-pollination from their APU group on power delivery: voltage-adaptive operation applies only the voltage necessary to complete the work for a specific unit. My guess is that there are hundreds, if not thousands, of individual sensors throughout the die that feed data to a central controller handling voltage across the chip. It also profiles workloads so that it doesn’t overvolt a particular unit more than it needs to in order to complete the work.
The chip can dispatch 64 pixels per clock. This becomes important at 4K resolutions, because all of those pixels need to be painted somehow. The chip also includes 2 MB of L2 cache, double that of Hawaii. This ties back to the memory subsystem and the 4 GB of memory: a larger L2 cache is extremely important for consistently accessed data feeding the compute units, and it also helps tremendously in GPGPU applications.
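For a sense of scale on that 64 pixels per clock figure, here is some rough arithmetic of my own (illustrative only; real workloads add overdraw, blending, and AA resolve on top of the final writes):

```python
# Peak pixel fill rate vs. the raw output requirement of 4K at 60 Hz,
# using the 64 pixels/clock figure above and the Fury X's 1050 MHz clock.
pixels_per_clock = 64
core_clock_hz    = 1050e6

peak_fill    = pixels_per_clock * core_clock_hz   # ~67.2 Gpixels/s
final_writes = 3840 * 2160 * 60                   # ~0.5 Gpixels/s at 4K60

print(f"Peak fill:   {peak_fill / 1e9:.1f} Gpix/s")     # 67.2
print(f"4K60 output: {final_writes / 1e9:.2f} Gpix/s")  # 0.50
print(f"Headroom:    {peak_fill / final_writes:.0f}x")  # ~135x
```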
Fiji is certainly an iteration of the previous GCN architecture. It does not add a tremendous number of features to the line, but what it does add is quite important. HBM is the big story, along with the increased power efficiency of the chip. Combined, these allow a nearly 600 mm² chip with 4GB of HBM memory to exist at a 275 watt TDP, one that exceeds the NVIDIA Titan X's by around 25 watts.
Now that you are educated on the primary changes brought forth by the Fiji architecture itself, let's look at the Fury X implementation.
AMD Radeon R9 Fury X Specifications
AMD has already announced that the flagship Radeon R9 Fury X is going to have some siblings in the not-too-distant future. That includes the R9 Fury (non-X) that partners will sell with air cooling as well as a dual-GPU variant that will surely be called the AMD Fury X2. But for today, the Fury X stands alone and has a very specific target market.
|  | R9 Fury X | GTX 980 Ti | TITAN X | GTX 980 | TITAN Black | R9 290X |
|---|---|---|---|---|---|---|
| GPU | Fiji | GM200 | GM200 | GM204 | GK110 | Hawaii XT |
| GPU Cores | 4096 | 2816 | 3072 | 2048 | 2880 | 2816 |
| Rated Clock | 1050 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 889 MHz | 1000 MHz |
| Texture Units | 256 | 176 | 192 | 128 | 240 | 176 |
| ROP Units | 64 | 96 | 96 | 64 | 48 | 64 |
| Memory | 4GB | 6GB | 12GB | 4GB | 6GB | 4GB |
| Memory Clock | 500 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) | 384-bit | 384-bit | 256-bit | 384-bit | 512-bit |
| Memory Bandwidth | 512 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 336 GB/s | 320 GB/s |
| TDP | 275 watts | 250 watts | 250 watts | 165 watts | 250 watts | 290 watts |
| Peak Compute | 8.60 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.1 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 8.9B | 8.0B | 8.0B | 5.2B | 7.1B | 6.2B |
| Process Tech | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $649 | $649 | $999 | $499 | $999 | $329 |
The most impressive specification is the stream processor count, sitting at 4,096 for the Fury X, an increase of 45% compared to the Hawaii GPU used in the R9 290X. Clock speeds didn't decrease to get there either, which means gaming performance has the chance to be substantially improved with Fiji. Peak compute capability jumps from 5.63 TFLOPS to an amazing 8.6 TFLOPS, easily outpacing even the NVIDIA GeForce GTX Titan X, rated at 6.14 TFLOPS.
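Those TFLOPS ratings fall directly out of the shader counts and clocks in the table above: each stream processor can issue one fused multiply-add (two floating point operations) per clock, so the math is simply:

```python
# Peak single-precision compute = shaders x clock (MHz) x 2 FLOPs (FMA).
def peak_tflops(shaders: int, clock_mhz: int) -> float:
    return shaders * clock_mhz * 2 / 1e6

print(peak_tflops(4096, 1050))  # Fury X:            8.6016 -> 8.60 TFLOPS
print(peak_tflops(3072, 1000))  # GTX Titan X:       6.144  -> 6.14 TFLOPS
print(peak_tflops(2816, 1000))  # 980 Ti / R9 290X:  5.632  -> 5.63 TFLOPS
```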
Texture units also increased by the same 45%, but there is a question around the ROP count. With only 64 render back ends present on Fiji, the same count as the Hawaii XT GPU used on the R9 290X, the GPU's capability for final blending might be in question. It's possible that AMD felt Hawaii's ROP throughput was overkill for the pixel processing capability behind it, and that the proper balance was found by keeping the count at 64 ROPs on Fiji. I think we'll find some answers in our benchmarking and testing going forward.
With 4GB on board, a limitation of the current generation of HBM, the AMD Fury X stands against the GTX 980 Ti with 6GB and the Titan X with 12GB. Heck, even the new Radeon R9 390X and 390 ship with 8GB of memory. That presents another potential problem for AMD's Fiji GPU: will the memory bandwidth and driver improvements be enough to counter the smaller frame buffer of the Fury X compared to its competitors? AMD is well aware of this but believes that a combination of the faster memory interface and "tuning every game" will ensure that the 4GB memory limit doesn't become a bottleneck. AMD noted that the GPU driver is responsible for memory allocation, and that technologies like memory compression and caching can drastically reduce memory footprints.
While I agree that the HBM implementation should help things, I don't think it's automatic; GDDR5 and HBM don't differ by that much in net bandwidth or latency. And while tuning for each game will definitely be important, that puts a lot of pressure on AMD's driver and developer relations teams to get things right on day one of every game's release.
At 512 GB/s, the AMD Fury X exceeds the available memory bandwidth of the GTX 980 Ti by 52%, even with a rated memory clock of just 500 MHz. That added memory performance should allow AMD to be more flexible with memory allocation, but drivers will definitely have to be Fiji-aware to change how the GPU brings data into the system.
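The "wide and slow" trade-off is easy to see in per-pin terms; again, this is my arithmetic from the spec table above, not vendor math:

```python
# HBM moves ~1 Gbps per pin across 4096 pins; the 980 Ti's GDDR5 moves
# 7 Gbps effective per pin across only 384. Width wins here.
fury_x_gb_s   = 4096 * 1.0 / 8   # 512 GB/s
gtx980ti_gb_s = 384 * 7.0 / 8    # 336 GB/s

print(f"Fury X advantage: {fury_x_gb_s / gtx980ti_gb_s - 1:.0%}")  # 52%
```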
The Fury X's TDP of just 275 watts, 15 watts lower than the Radeon R9 290X's, says a lot about the efficiency improvement Fiji offers over Hawaii. However, the GTX 980 Ti still runs at a lower 250 watts; I'll be curious to see how this is reflected in our power testing later.
Just as we have seen with NVIDIA's Maxwell design, the 28nm process is being stretched to its limits with Fiji. A chip with 8.9 billion transistors is no small feat, running past the GM200 by nearly a billion (and even that was astonishing when it launched).
Comments

Dear Ryan,
I can't find a driver version:
15.15-180612a-18565BE or
15.15-150611a-185358E
The PostRant about 4 posts back up, with the link to some Forbes site, made me almost piss myself. Hahahaha LOL.
I know right. As an AMD fan, I am embarrassed.
Is it me, or do the best frame times come from CrossFire?
I think Fiji is really just a preview of what is coming next year: 16 nm or 14 nm production and the new HBM memory across a wider variety of cards. With the new process it should be even more efficient.
What about anti-aliasing? With such an enormous memory bandwidth, shouldn't some over-the-top AA modes come almost free? Wouldn't 3DMark Fire Strike with some custom AA test this fine?
A person asked why we are disappointed; let me summarize:
1. It is 9 months behind Maxwell
2. It is water cooled and has HBM
Even with that lead time and those hardware improvements, it is, I would say, 1-2% behind the 980 Ti at stock. And Fury has absolutely no overclocking headroom, while the 980 Ti can overclock like a champ, often gaining 5-10% on air, which then makes it 5-10% faster than Fury at the same price point.
AMD can't sell this for under $650 – not now, because it has supply issues and there are enough people willing to pay $650 for the scarce supply there is. I expect a price drop once supply stabilizes and they see that the market share they lost to Maxwell is not coming back. Not with this product.
Also, HBM is a new tech, the water cooling build quality is top notch, and the card has 8.9 billion transistors… I am pretty sure this card is expensive to make [maybe 10-20% more expensive than the 980 Ti]. And AMD needs cash more than NVIDIA does, so they really cannot price it too low, not now.
I'd go for a GTX 980; they are hitting $450-500 now, they can overclock 15%, and they run cool and quiet. Same performance as Fury.
Or better yet: we have been on 28 nm for 3 generations of cards, HBM 2.0 is on the way, and NVIDIA is adopting it. Let's wait for Pascal on 14/16 nm with HBM 2.0 and NVIDIA's superior architecture.
Sounds like I suck green goblin dick, and I really wish AMD had delivered. But there's all that hype they always build before a product launch, and then the flop [even though they did reach some parity]. Hopefully driver optimizations will make it 3-7% faster, which would put it right at parity with an overclocked Ti.
I agree with you for the most part. AMD had an opportunity here to do one of three things to get a "win": performance 5-10% higher than the 980 Ti, bundling a couple of good upcoming games, or selling the card at $550. Almost any one of those would have persuaded a bunch of people, and two out of three would have won the show.
Instead, sadly, there was a large group of people on the fence considering adopting red, but because the 980 Ti slightly outperforms the Fury X TODAY for the same price, they have upgraded to the 980 Ti.
Sure, drivers and whatnot will definitely help the Fury X, and probably have it surpass the 980 Ti, but at some point you need to operate a company in the present and not a projected future.
With that said, I'll enjoy my R9 290 a little while longer, as it still does great on my 1080p monitor.
Almost another 2900 XTX. AMD needs a small die which can compete with the 980 but at only $299.
Wow, I don't think I've seen so many comments and arguments on a post like this before, lol.
Ehh, the G-Sync/FreeSync comments section is far worse.
Oh really? I don't get the tribal mentality, etc. I just get the best product for my budget.
I don't want to nitpick too much here as an AMD fan. I get that the 980 Ti has the win, but I think people need more knowledge about tweaking AMD cards. For instance, you used 4x MSAA in GTA V, which is well known to be a huge performance hit for AMD cards.
AMD fans need to take all video card reviews with a grain of salt because of the proprietary GameWorks offerings. I'll admit GameWorks makes games look better, but for whatever reason AMD doesn't seem to address those effects until a week or so after a game comes out… or not at all. Things like MSAA aren't super pretty anyway, so they are usually replaced with another form of AA.
I picked settings for these games independent of any particular GPU, which is what you SHOULD do when comparing apples to apples.
There is no Gameworks technology enabled in any of the games we used…?
Why do none of the reviewers use MSI Afterburner? The Catalyst overclocking solution is terrible and always has been. Once you can unlock the voltage, this card will hit 1250 on the core easily, if not more. A true enthusiast would find a way to make this card pump out as many frames as possible. All these reviews seem biased so far, and quite frankly, for all the talk NVIDIA fanboys spouted about heat and noise on AMD cards, this card is far cooler and far quieter than a 980 Ti.
Oh, you know the Fury X would hit 1250 MHz, huh? Nice.
"Once I can unlock the voltage…" Totally agree. We actually need that to happen first before we can test it.
Spoken like a true AMD fanboy. Suuuure, we believe you. Your words are so true and factual. lol
Ryan, quick question: do you think this means that the ideal gaming experience would be 2x 980 Ti in SLI?
I'm thinking that because, in your 980 3- and 4-way SLI review, the frame timings were pretty bad for 3- and 4-way scaling – even if you discount the fact that the third and fourth GPU did not offer much improvement in frame rates.
For the Fury X, the weaknesses that I see are that:
– AMD's CrossFire does scale better, but the problem is it has only 4GB of VRAM
– Although the card does better relative to the 980Ti at 4K, that's where the extra VRAM is most needed
– The price is too high at $650 USD; $550 USD would be fair
– The 290X did not have as much OC headroom as the 980Ti; even with voltage you'd still be in the 1250-1300 MHz zone tops on the core, and the 980Ti can do over 1500 MHz
In terms of design:
– It isn’t as power efficient due to the FP64
– They should have shipped this with 96 not 64 ROPs
– A second variant with 8GB would be needed
I think that if AMD addressed these 3 concerns, they’d have a strong card.
Thanks for the review.
Good comment here, solid information and views.
As for the "best" gaming solution today, yeah, I would probably pick GTX 980 Ti SLI.
Another consideration is that there are already confirmations that 3 cards are coming out:
– MSI 980Ti Lightning (released on their Facebook page)
– EVGA 980Ti Classified
– Galax 980Ti HOF (already on sale on their website)
Considering how well cards like the Lightning have historically done, it might open up the possibility of even more OC headroom and of overtaking the Titan X.
Sadly AMD has confirmed no custom PCBs this round.
This sounds huge: a small difference in driver version and Fury is competitive against the 980 Ti. Should this be confirmed? http://www.reddit.com/r/pcmasterrace/comments/3b2ep8/fury_x_possibly_reviewed_with_incorrect_drivers/
This is just FUD as far as I can tell. If there was a driver that would improve the Fury X performance by any amount today, AMD would be beating down our doors to get it tested again.
Hi Ryan, you might find the below interesting:
https://translate.google.com/translate?hl=en&sl=ru&tl=en&u=http%3A%2F%2Fwww.ixbt.com%2Fvideo3%2Ffiji-part3.shtml
In it, Fiji soundly beats the Ti/X; they mention a driver dated the 18th.
Quote:
“The AMD Radeon R9 Fury X 4096 MB 4096-bit HBM PCI-E accelerator is the most productive single-GPU solution for top-class gaming today. Yes, just three weeks ago we were saying the same thing about the NVIDIA GeForce GTX 980 Ti, but now it is clear that the former king from NVIDIA has lost his throne. Just as important, with the release of Fury AMD has debuted a new architecture using HBM memory, which will be applied not only in GPUs but also in APUs, making integrated graphics more powerful through increased memory bandwidth and no need for shared system memory. Yes, only 4 gigabytes of memory are installed locally for now, but that is just the start; plans for a second generation – heirs of Fiji with increased memory – are already known. In the meantime, we can see that this first attempt came out very successful.”
http://www.ixbt.com/video3/fiji-part3.shtml
AMD Matt already debunked that claim of an updated driver, saying it doesn't match their nomenclature. Which makes you wonder if that site's results are valid at all. Probably some guy trolling AMD fanboys for site hits, as it's clearly an outlier with results that fall way outside the norm.
http://forums.overclockers.co.uk/showthread.php?p=28230087#post28230087
AMDMatt:
“Yep, no.
For reference on what driver strings mean by the way:
15.15-150611a-185358E (This is the real driver we provided)
15.15 – is the branch
150611 – is the date of the build, YY/MM/DD
185358 – is the build request from our system to create this driver based off the information above
In this case the review is suggesting a driver dated from June 12th 2018, and a build request that has a letter instead of a number for its last digit, so it’s either a lot of typos or someone being misleading on purpose.”
I am personally just waiting for Xbitlabs results in favor of Fury X and the Russian website revival will be complete.
@Ryan, lol, exactly. If there was any validity to this claim, AMD would have been screaming to stop the presses for these launch reviews.
Personally, I am surprised they didn't insist you guys turn off AF for all reviews, as they did for their internal benchmarks. Because that wasn't a huge red flag or anything! 😀
Grasping at any straws, these fanboys. Just wow… poor plebs.
I wish to see the same test on Windows 10, because of the new WDDM 2.0 driver model. Things got really better there with my 7970 – not a placebo effect.
You know how NVIDIA knew exactly how to position the 980 Ti? Because they made a 4GB HBM card in house, tested the thing, and knew the 980 Ti would best it by this amount. And they could also see the 4K writing on the wall.
I just hope this doesn't mark the beginning of the end for AMD (of course, if they do end up folding/restructuring again/selling, the beginning of the end will be said to have begun over a year ago).
Thanks to Ryan for another great review. I basically don't buy hardware until it's reviewed here and at HardOCP.
Yup, bought myself a 980 Ti for my 1440p G-Sync monitor and couldn't be happier. Well worth every penny saved for it =D
Why would anyone listen to RooseBolton?! You back-stabbing murderer! lol
But seriously, there is no way in hell NV did that. 🙂
Red Wedding, baby! lol. The north remembers! Oh, uh, I guess it's not good for me that the north remembers. Doh!
Ignorance is bliss. NVIDIA is a long way from getting HBM even running in any prototype form.
And NVIDIA's simulations show a benefit in going HBM, and that's why their next-gen architecture, in 2016, will use HBM.
The 980 Ti is louder & hotter than the Fury X, and not always faster.
Also, outside of gaming the 980 Ti is a dud… check the compute benchmarks; AMD's architecture is absolute state of the art. (That's why Apple is currently using GCN in their highest-end workstation products.)
What else you got?
It's over, man, just give up… you're making yourself and other AMD fanboys look extremely ridiculous now. So sad…
The GTX 980 Ti is better than the Fury X in ALL situations.
No one cares anymore
Could this card work on a 500W Gold PSU with 40A on the 12V rail?
I would like to see the various iterations of the Fury X from manufacturers like MSI and Gigabyte, etc. These cards, as usual, feature better clocks and some tweaks over the REFERENCE model. I guess the first generation of HBM limits them to 4GB(?); apparently folks are saying the next-gen HBM2 will feature 8GB or whatever. If I were a GPU designer, I'd push for everything we could feasibly do at this point in time. I think sometimes they release increments that literally come up shorter than they need to. I would have pushed the architects to come up with a way to increase the memory available. I thought, as maybe they did, that HBM would allow much faster throughput and that the amount of memory would not be an issue.
Why don’t they compare the new 390X in these benchmarks???
A bit late to ask. Ryan, what is Fiji's DP rated at?
Not a horrible part, but as usual AMD overpromises and underdelivers while their fanboys overhype and underwhelm.
Why do people bash and complain? 650 dollars for a card with a water cooling solution sounds great to me.
If you don't want a water-cooled card, wait for the Fury, which will be cheaper for sure.
Yet everyone talks about how the 980 Ti beats the Fury X, how overhyped the Fury X is, and so on, while the Fury X is not even on public sale yet – it's a freaking 2-week-old card without tuned drivers.
Get your (sheet) together already and wait before making conclusions, ffs.
This nVidiaPerspective.com seems way too NVIDIA-biased to me, especially in this article.