A fury unlike any other…
It’s finally time to talk specifics, people – can the new AMD Fury X rival the performance of the GeForce GTX 980 Ti?
Officially unveiled by AMD during E3 last week, the brand new Radeon R9 Fury X graphics card is finally ready for our full review. Very few times has a product launch meant more to a company, and to its industry, than the Fury X does this summer. AMD has been lagging behind in the highest tiers of the graphics card market for a full generation, depending on the 2-year-old Hawaii GPU to hold its own against a continuous barrage of products from NVIDIA. The R9 290X, despite using more power, was able to keep up through the GTX 700-series days, but the release of NVIDIA's Maxwell architecture forced AMD to move the R9 200-series parts into the sub-$350 field, well below the selling prices of NVIDIA's top cards.
The AMD Fury X hopes to change that with a price tag of $650 and a host of new features and performance capabilities. It aims to once again put AMD's Radeon line in the same discussion with enthusiasts as the GeForce series.
The Fury X is built on the new AMD Fiji GPU, an evolutionary part based on AMD's GCN (Graphics Core Next) architecture. The design adds a lot of compute horsepower (4,096 stream processors), and it is also the first consumer product to integrate HBM (High Bandwidth Memory) with a 4096-bit memory bus!
Of course the question is: what does this mean for you, the gamer? Is it time to start making a place in your PC for the Fury X? Let's find out.
Recapping the Fiji GPU and High Bandwidth Memory
Because of AMD's slow drip of information leading up to the Fury X release, we already know much about the HBM design and the Fiji GPU. HBM is a fundamental shift in how memory is produced and utilized by a GPU. From our original editorial on HBM:
The first step in understanding HBM is to understand why it’s needed in the first place. Current GPUs, including the AMD Radeon R9 290X and the NVIDIA GeForce GTX 980, utilize a memory technology known as GDDR5. This architecture has scaled well over the past several GPU generations but we are starting to enter the world of diminishing returns. Balancing memory performance and power consumption is always a tough battle; just ask ARM about it. On the desktop component side we have much larger power envelopes to work inside but the power curve that GDDR5 is on will soon hit a wall, if you plot it far enough into the future. The result will be either drastically higher power consuming graphics cards or stalling performance improvements of the graphics market – something we have not really seen in its history.
Historically, when technology comes to an inflection point like this, we have seen the integration of technologies onto the same piece of silicon. In 1989 we saw Intel move cache and floating point units onto the processor die; in 2003 AMD was the first to merge the north bridge's memory controller into the CPU, and then graphics, the south bridge, and even voltage regulation all followed suit.
The answer for HBM is an interposer. The interposer is a piece of silicon that both the memory and processor reside on, allowing the DRAM to be in very close proximity to the GPU/CPU/APU without being on the same physical die. This close proximity allows for several very important characteristics that give HBM its advantages over GDDR5. First, this proximity allows for extremely wide communication bus widths: rather than 32 bits per DRAM, we are looking at 1024 bits for a stacked array of DRAM (more on that in a minute). Being closer to the GPU also means the clocks that regulate data transfer between the memory and processor can be simplified, and slower, to save power and reduce design complexity. As a result, the proximity of the memory means that the overall memory design and architecture can improve performance per watt to an impressive degree.
So now that we know what an interposer is and how it allows the HBM solution to exist today, what does the high bandwidth memory itself bring to the table? HBM is DRAM-based but was built with low power consumption and ultra-wide bus widths in mind. The idea was to target a “wide and slow” architecture, one that scales up with high amounts of bandwidth and where latency wasn’t as big of a concern. (Interestingly, latency was improved in the design without intent.) The DRAM chips are stacked vertically, four high, with a logic die at the base. The DRAM dies and the logic die are connected to each other with through-silicon vias (TSVs), small holes etched through the silicon that permit die-to-die communication at incredible speeds. Allyn taught us all about TSVs back in September of 2014 after a talk at IDF, and if you are curious about how this magic happens, that story is worth reading.
The first iteration of HBM on the flagship AMD Radeon GPU will include four stacks of HBM, a total of 4GB of GPU memory. That should give us in the area of 500 GB/s of total bandwidth for the new AMD Fiji GPU; compare that to the R9 290X today at 320 GB/s and you’ll see a raw increase of around 56%. Memory power efficiency improves at an even greater rate: AMD claims that HBM will deliver more than 35 GB/s of bandwidth per watt consumed by the memory system, while GDDR5 manages just over 10 GB/s per watt.
AMD has sold me on HBM for high-end GPUs; I think that comes across in this story. I am excited to see what AMD has built around it and how this improves their competitive stance against NVIDIA. Don’t expect to see dramatic decreases in total power consumption with Fiji simply due to the move away from GDDR5, though every bit helps when you are trying to offer improved graphics performance per watt. How a 4GB limit on the memory system of a flagship card will pan out in 2015-2016 is still a question to be answered, but the additional bandwidth offers never-before-seen flexibility to the GPU and software developers.
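Those efficiency claims are easy to turn into rough numbers. Below is a back-of-the-envelope sketch (our own arithmetic, not an AMD methodology) that estimates memory-subsystem power from the GB/s-per-watt figures quoted above; the bandwidth inputs are taken from the spec table later in this review.

```python
# Rough estimate of memory-subsystem power from bandwidth-per-watt claims.
# The efficiency figures are AMD's marketing numbers quoted above; treat the
# outputs as illustrative, not measured values.

HBM_GBPS_PER_WATT = 35.0     # AMD claim: more than 35 GB/s per watt
GDDR5_GBPS_PER_WATT = 10.5   # AMD claim: just over 10 GB/s per watt

def memory_power_watts(bandwidth_gbps: float, gbps_per_watt: float) -> float:
    """Approximate power needed by the memory system for a given bandwidth."""
    return bandwidth_gbps / gbps_per_watt

print(f"Fury X HBM @ 512 GB/s:    ~{memory_power_watts(512, HBM_GBPS_PER_WATT):.0f} W")
print(f"R9 290X GDDR5 @ 320 GB/s: ~{memory_power_watts(320, GDDR5_GBPS_PER_WATT):.0f} W")
```

Even if those efficiency claims are optimistic, the gap suggests HBM saves on the order of 15 watts of memory power at higher total bandwidth, which is meaningful but, as noted above, not a dramatic slice of a 275 watt card.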
And from Josh's recent Fiji GPU architectural overview:
AMD leveraged HBM to feed their latest monster GPU, but there is much more to it than memory bandwidth and more stream units.
HBM does require a new memory controller compared to what was utilized with GDDR5. There are 8 new memory controllers on Fiji that interface directly with the HBM modules. These are supposedly simpler than what we have seen with GDDR5 because they do not have to work at high frequencies. There are also the logic dies at the base of the stacked modules and the less exotic interface needed to address those units, again as compared to GDDR5. The changes have resulted in higher bandwidth, lower latency, and lower power consumption than previous units, and likely a smaller amount of die space needed for these controllers.
Fiji also improves upon what we first saw in Tonga. It can do as many theoretical primitives per clock (4) as Tonga, but AMD has improved the geometry engine so that the end result will be faster than what we have seen previously. It will have a per-clock advantage over Tonga, but we have yet to see how much. It shares Tonga's 8-wide ACE (Asynchronous Compute Engine) arrangement that is very important in DX12 applications which can leverage it. The ACE units can dispatch a large number of instructions of multiple types, further leveraging the parallelization of a GPU in that software environment.
The chip features 4 shader engines, each with its own geometry processor (each improved from Tonga). Each shader engine features 16 compute units, and each CU holds four 16-wide vector units plus a single scalar unit. AMD categorizes this as a 4096 stream unit processor. The chip has the xDMA engine for bridgeless CrossFire, the TrueAudio engine for DSP-accelerated 3D audio, and the latest VCE and UVD accelerators for video. Currently the video decode engine supports up to H.265, but does not handle VP9… yet.
In terms of stream units it is around 1.5X that of Hawaii. The expectation off the bat would be that the Fiji GPU will consume 1.5X the power of Hawaii. This, happily for consumers, is not the case. Tonga improved on power efficiency to a small degree with the GCN architecture, but it did not come close to matching what NVIDIA did with their Maxwell architecture. With Fiji it seems like AMD is very close to approaching Maxwell.
Fiji includes improved clock gating capabilities as compared to Tonga, allowing areas not in use to go to a near-zero energy state. AMD also did some cross-pollination from their APU group with power delivery. Voltage-adaptive operation applies only the voltage necessary to complete the work for a specific unit. My guess is that there are hundreds, if not thousands, of individual sensors throughout the die that provide data to a central controller which handles voltage operations across the chip. It also profiles workloads so that it doesn’t overvolt a particular unit more than it needs to complete the work.
The chip can dispatch 64 pixels per clock. This becomes important at 4K resolutions because those pixels need to be painted somehow. The chip includes 2 MB of L2 cache, double that of the previous Hawaii. This ties back to the memory subsystem and the 4 GB of memory: a larger L2 cache is extremely important for frequently accessed data for the compute units, and it also helps tremendously in GPGPU applications.
Fiji is certainly an iteration of the previous GCN architecture. It does not add a tremendous number of features to the line, but what it does add is quite important. HBM is the big story, along with the increased power efficiency of the chip. Combined, these allow a nearly 600 mm² chip with 4GB of HBM memory to exist at a 275 watt TDP, exceeding that of the NVIDIA Titan X by around 25 watts.
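Two of the numbers in Josh's overview are easy to sanity-check yourself. The sketch below (our own arithmetic, nothing from AMD) derives the 4,096 stream processor count from the shader-engine hierarchy he describes, and puts the 64 pixels-per-clock figure in the context of a 4K/60 workload.

```python
# Sanity check of Fiji's stream processor count and raw pixel fill rate.
# Hierarchy figures come from the architectural overview quoted above.

SHADER_ENGINES = 4
CUS_PER_ENGINE = 16
VECTOR_UNITS_PER_CU = 4      # four SIMD units per compute unit
LANES_PER_VECTOR_UNIT = 16   # each SIMD unit is 16 lanes wide

stream_processors = (SHADER_ENGINES * CUS_PER_ENGINE *
                     VECTOR_UNITS_PER_CU * LANES_PER_VECTOR_UNIT)
print(stream_processors)  # 4096

PIXELS_PER_CLOCK = 64
CORE_CLOCK_HZ = 1050e6
fill_rate = PIXELS_PER_CLOCK * CORE_CLOCK_HZ   # ~67.2 Gpixels/s raw
pixels_4k_60 = 3840 * 2160 * 60                # ~0.50 Gpixels/s on screen

# ~135x raw headroom; real games shade, overdraw, and blend each pixel
# many times per frame, which is why the margin needs to be this large.
print(f"{fill_rate / pixels_4k_60:.0f}x")
```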
Now that you are educated on the primary changes brought forth by the Fiji architecture itself, let's look at the Fury X implementation.
AMD Radeon R9 Fury X Specifications
AMD has already announced that the flagship Radeon R9 Fury X is going to have some siblings in the not-too-distant future. That includes the R9 Fury (non-X) that partners will sell with air cooling as well as a dual-GPU variant that will surely be called the AMD Fury X2. But for today, the Fury X stands alone and has a very specific target market.
| | R9 Fury X | GTX 980 Ti | TITAN X | GTX 980 | TITAN Black | R9 290X |
|---|---|---|---|---|---|---|
| GPU | Fiji | GM200 | GM200 | GM204 | GK110 | Hawaii XT |
| GPU Cores | 4096 | 2816 | 3072 | 2048 | 2880 | 2816 |
| Rated Clock | 1050 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 889 MHz | 1000 MHz |
| Texture Units | 256 | 176 | 192 | 128 | 240 | 176 |
| ROP Units | 64 | 96 | 96 | 64 | 48 | 64 |
| Memory | 4GB | 6GB | 12GB | 4GB | 6GB | 4GB |
| Memory Clock | 500 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) | 384-bit | 384-bit | 256-bit | 384-bit | 512-bit |
| Memory Bandwidth | 512 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 336 GB/s | 320 GB/s |
| TDP | 275 watts | 250 watts | 250 watts | 165 watts | 250 watts | 290 watts |
| Peak Compute | 8.60 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.10 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 8.9B | 8.0B | 8.0B | 5.2B | 7.1B | 6.2B |
| Process Tech | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $649 | $649 | $999 | $499 | $999 | $329 |
The most impressive specification is the stream processor count, sitting at 4,096 for the Fury X, an increase of 45% compared to the Hawaii GPU used in the R9 290X. Clock speeds didn't decrease to get there either, which means gaming performance has the chance to improve substantially with Fiji. Peak compute capability jumps from 5.63 TFLOPS to an amazing 8.60 TFLOPS, easily outpacing even the NVIDIA GeForce GTX Titan X rated at 6.14 TFLOPS.
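Those peak compute figures all fall out of the same standard formula: FP32 FLOPS = shader count × 2 operations per clock (a fused multiply-add counts as two) × clock speed. A minimal sketch of that arithmetic, using the rated clocks from the table above:

```python
# Peak FP32 compute from the spec table: cores x 2 FLOPs/clock x clock speed.

def peak_tflops(cores: int, clock_mhz: float) -> float:
    """Theoretical FP32 throughput in TFLOPS."""
    return cores * 2 * clock_mhz * 1e6 / 1e12

print(f"Fury X:     {peak_tflops(4096, 1050):.2f} TFLOPS")  # 8.60
print(f"GTX 980 Ti: {peak_tflops(2816, 1000):.2f} TFLOPS")  # 5.63
print(f"TITAN X:    {peak_tflops(3072, 1000):.2f} TFLOPS")  # 6.14
print(f"R9 290X:    {peak_tflops(2816, 1000):.2f} TFLOPS")  # 5.63
```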
Texture units also increased by the same 45%, but there is a question about the ROP count. With only 64 render back ends present on Fiji, the same number as the Hawaii XT GPU used on the R9 290X, the GPU's capability for final blending might be in question. It's possible that AMD felt the ROP performance of Hawaii was overkill for the pixel processing capability it provided, and thus that the proper balance was found by keeping 64 ROPs on Fiji. I think we'll find some answers in our benchmarking and testing going forward.
With 4GB on board, a limitation of the current generation of HBM, the AMD Fury X stands against the GTX 980 Ti with 6GB and the Titan X with 12GB. Heck, even the new Radeon R9 390X and 390 ship with 8GB of memory. That presents another potential problem for AMD's Fiji GPU: will the memory bandwidth and driver improvements be enough to counter the Fury X's smaller frame buffer compared to its competitors? AMD is well aware of this but believes that a combination of the faster memory interface and "tuning every game" will ensure that the 4GB memory limit does not become a bottleneck. AMD noted that the GPU driver is responsible for memory allocation, and technologies like memory compression and caching can drastically impact memory footprints.
While I agree that the HBM implementation should help things, I don't think it's automatic; GDDR5 and HBM don't differ by that much in net bandwidth or latency. And while tuning for each game will definitely be important, that puts a lot of pressure on AMD's driver and developer relations teams to get things right on day one of every game's release.
At 512 GB/s, the AMD Fury X exceeds the available bandwidth of the GTX 980 Ti by 52%, even with a rated memory clock speed of just 500 MHz. That added memory performance should allow AMD to be more flexible with memory allocation, but drivers will definitely have to be Fiji-aware to change how the card brings data into the system.
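The "wide and slow" trade-off is visible directly in the math: bandwidth is just bus width times effective data rate. A minimal sketch of that arithmetic, assuming HBM's 500 MHz clock is doubled for its DDR signaling while the GDDR5 figure in the table is already the effective rate:

```python
# Bandwidth = bus width (bytes) x effective data rate (transfers/s).

def bandwidth_gbps(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s from bus width and effective data rate."""
    return bus_width_bits / 8 * data_rate_gtps

# Fury X HBM: 4096-bit bus, 500 MHz clock, double data rate -> 1.0 GT/s
print(bandwidth_gbps(4096, 1.0))  # 512.0 GB/s
# GTX 980 Ti GDDR5: 384-bit bus at 7 GT/s effective
print(bandwidth_gbps(384, 7.0))   # 336.0 GB/s
```

The same total bandwidth can be reached with a narrow, fast bus or a wide, slow one; HBM takes the latter path to spend less power on signaling, as discussed on the previous page.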
Fury X's TDP of 275 watts, 15 watts lower than the Radeon R9 290X, says a lot about the improvement in efficiency that Fiji offers over Hawaii. However, the GTX 980 Ti still runs at a lower 250 watts; I'll be curious to see how this is reflected in our power testing later.
Just as we have seen with NVIDIA's Maxwell design, the 28nm process is being stretched to its limits with Fiji. A chip with 8.9 billion transistors is no small feat, running past the GM200 by nearly a billion (and even that was astonishing when it launched).
For god's sake, some games barely run at 1080p on max settings… where is the 1080p benchmark again? There is absolutely no point in just chasing bigger displays with every new graphics card when a solid 60 fps can't be guaranteed in every game at max.
Tell us about 1080 60fps 13 or 14 more times and I swear I will flip out!
Excuse me Ryan: I hope you do a 390x review soon?
AMD has not seemed eager to send these out for review. But I'll get my hands on one!
The 390X is an overclocked 290X, so its performance is only about 10% higher.
Now that the reviews are out and it is indeed so very close to the 980 Ti, it does make me stop and think. Did Nvidia engage in some corporate espionage here? Why else would the company introduce a part that undercuts their own higher priced Titan unless they were already expecting the Fury and attempting to dampen the reaction…
Unlikely :) Nvidia has just been overcharging for the Titan X and had enough wiggle room to undercut it with a nearly equal $650 version. Besides, AMD set the price based on the 980 Ti.
It compares pretty well. My issue with AMD is that on paper this card should be beating the 980 Ti… Something fishy with the drivers has to be a factor. Right?
They both use the same suppliers, have the same board partners, and many employees know one another. There aren't a lot of secrets in the industry that stay secret for long.
I am quite sure that NVIDIA was doing some research to figure out what AMD might release. In actuality I think that NVIDIA would have aimed to run at just UNDER the performance of the Fury X at the same price if they'd had their choice, so I am guessing NVIDIA overestimated the performance that Fiji brought to the table.
Interesting discussion for the podcast maybe.
Linus totally called this one. He said that Nvidia knew the performance of the Fury X far in advance. The 980 Ti's release and the 980's price drop were not accidents that just happened to hurt the Fury X in all the right ways. Although, it's very likely that AMD knew what Nvidia was up to as well. Cards don't just magically fall into those price points; it was pre-planned. So for people wondering what Nvidia will do with the price of the 980 Ti now that the Fury is out, the answer, I believe, is nothing: the release of the 980 Ti was already the response.
I’m sure AMD had a rather unhappy day when they found out about the 980 Ti’s pricing (much like they were probably quite upbeat after the Titan X).
Current Fury X pricing seems to indicate that they’re trying to ride it out for the moment, but with custom 980 Tis coming to market, I can’t imagine it sticking at $650 unless supply is severely constrained.
Nvidia has such a large lead in market share that they know they could take a loss on one card. They have a win in just about every other card right now, as most of their main lineup is a new chip, not an old one renamed to look new.
I suspected this when the 980 Ti so closely matched the Titan X. I suspect the 980 Ti would have been cut down more significantly without the competition from Fury X. If the 980 Ti had been cut down more significantly, then Fury X would have been competitive with a $1000 Titan X, rather than a $650 980 Ti. Nvidia not only did not cut it down much, they also bumped up the clock compared to the Titan X. If Nvidia can supply the demand for the 980 Ti, then it is acceptable; it seems to be in stock. I have to wonder if Nvidia will release another card though. It seems like they would have GPUs which have defects preventing them from being sold in 980 Ti cards, but would still be a good product.
Ryan,
the new AMD product seems promising and is still in its early stages. There's room for improvement.
Comparing the specs with its competitor, have you attempted to overclock both cards to see if there are any comparable improvements vs. the competitor?
It appears that the memory frequency on the Fury X is pretty low. Not sure if that's a typo. They do have the bandwidth capability, but may be lacking on frequency, which can be noticed in some games.
The frequency is correct; they get their bandwidth from the 4096-bit memory bus. Remember the memory and the chip are much closer together, so when you raise the speed of one, it heats up both parts. They also probably don’t have the third-party drivers yet that would allow boosting the voltage of the cards, which can limit the overclock they can get.
As I said, still an early product. Maybe there'll be third-party drivers that can provide some improvements, as you mention. But I agree in regards to the additional challenges due to the short distance between units.
Ryan went into more detail about the overclocking during the podcast.
I am surprised you didn’t use Witcher 3 as a test game. It definitely is a good benchmark for newer cards.
Witcher 3 doesn't have an actual benchmark though.
What about benchmarking it by playing a section of the game? I thought that was the default way to benchmark.
But it isn't exact or repeatable, especially with wandering AI, time of day, etc. I get what you are saying, but it wouldn't be accurate unless you did the same motion like 50 times for each GPU in roughly the same area and compared averages. In Witcher 3 just looking in a different direction sometimes changes my FPS by 10!
It's how they do it in Crysis 3: there is a set save they load and run through to a certain spot. There is going to be a difference each time, yes, but that is why you run the benchmark a few times and take an average. It's about the only way to do it.
It’s a little disappointing seeing the Fury X lose marginally to the 980 Ti at the same price point. However, I think the real card we have to wait for is the Fury. If the Fury card that comes out later this year performs within 10-15% of the Fury X at $100 off its price point, that will probably be the card to get.
When Nvidia released the 980 and the 970, the 970 was an amazing buy up until the whole 3.5GB memory issue came to light. If AMD can avoid that with the release of the air cooled Fury card, consumers can probably take better advantage of the HBM with their own water cooling solutions.
There’s an interesting difference between the Fraps FPS and Observed FPS for 295×2 on Skyrim @ 4k. o.o
Yeah, AMD never fixed DX9 frame pacing…
Yeah… That ruled out CrossFire for me entirely. Not working with Skyrim is unacceptable. I’m also worried about the 4GB of VRAM for Skyrim, and it seems to be chugging a bit on GTA V at 4K. If there is a voltage unlock coming I hope we get it soon, because as it stands the 980 Ti will overclock better.
Some guys on OCN are pointing out some hot VRM temperatures on the back of the card. I dunno if it’s a problem or not. (The backplate remains cool, but under it the VRMs are supposed to be really hot. The plate isn’t touching the VRMs, and the air in there acts as insulation more than anything, or so it is claimed.)
They fixed the CrossFire frame pacing on DX9 for lower resolutions, just not for 4K/Eyefinity, I think.
If you compare your tests from before they fixed anything (when FCAT was new) to now, the DX9 1080p CF results were fixed, I think.
Free Canadian bacon with every Fury X!
I’m super impressed with the improvements in power consumption, but that’s the only thing I’m impressed with.
Performance trails the 980 ti – I don’t know what AMD was thinking pricing this the same as the 980 ti as the underdog. No DVI, no HDMI 2.0, do not want.
If this was priced at $550, this would be a solid release.
Hopefully the Fury without the water cooling will have the same chip and improved driver optimization when it does launch, to make it a more compelling option. Personally, I don’t care about the HDMI support, but I’m running a Korean monitor and I need my DVI-D. This and some other things are pushing me towards the 980 Ti, TBH.
Power draw is improved, but a lot of that could be due to the water cooler. You can see in the 295X2 how keeping the GPU at a very cool temperature lowers power draw because there is less leakage. So I would guess some of the draw savings is due to that. We will know for sure when the non-water-cooled one comes out.
Fury X got REKT!
Not quite rekt; it’s no Bulldozer. More like the old comparison between the 290X and 780 Ti (which, interestingly enough, has shifted heavily in the 290X’s favor with current drivers and games – I wonder if the same will happen with the Fury and 980 Ti).
They definitely hyped it too much, but it’s really not a bad card by any means. The biggest surprise is how much higher the Fury is in FLOPS than the 980 Ti, yet it delivers similar or slightly lower performance. Drivers? Memory limitations? Tessellation?
And Nvidia is increasingly segmenting their gaming SKUs from their accelerator SKUs; at least with AMD, some number-crunching advantages can be had at around the same price point. I can see where Nvidia is getting its power savings: by stripping out FLOPS/FP capability! So AMD’s product provides more computational performance should the newer gaming engines need it for physics and other enhancements, and AMD’s continued internal improvements to Mantle, quickly provided downstream to Khronos and Vulkan (the public-facing version of most of Mantle’s and others’ API contributions), will allow Fiji the same gains over time.
Really, the jury is still out on Fury X and its derivatives until more complete testing on the newer graphics APIs. And what about AMD’s continued internal Mantle developments that will make their way into the software stacks of gaming engines and games, through sharing with M$ for DX12, with Khronos for Vulkan, and any special sharing of Mantle with specific game makers for their products? The gaming comparisons alone are not enough at this early stage to totally dismiss AMD’s competing products; this is just the first match in a series, and hopefully the prices will get better on both sides, so the consumer wins.
If you look at the 680/780 Ti/980 vs. the competing AMD part, the GFLOPS have generally been in AMD's favor by a bit most of the time. But in a lot of games that doesn't matter a bit.
A bit, sure, but I don’t think the gap was ever this big. Makes me more hopeful about future driver improvements since there’s so much raw power there.
Games not as much, but for other uses, and some non-gaming graphics usage, that extra processing power comes in handy. With Blender 3D getting support for Cycles rendering on AMD GPUs, things are about to change for low-cost 3D graphics projects, especially given the costs of the professional GPU cards that most independent users cannot readily afford. I’m looking at the future dual-Fury-based SKUs, as well as what pricing may happen around the professional/HPC APU workstation variant that AMD has in the works, based on Zen CPU cores and Greenland graphics sharing HBM on an interposer.
Nvidia’s pricing is way beyond what Abdul-Jabbar could reach with a rocket-assisted skyhook, for users that need all those GFLOPS without the drained bank accounts.
It was Fury that brought on that lower pricing, and Fury does not have bitcoin mining to keep the costs high; in fact I see a relatively quick price drop on the Fury SKU, more so than with the previous generation.
A price war looks like where things are heading, more so from AMD’s side, which needs to gain market share and the extra revenue that comes with it at the expense of large profit margins. Large sales volumes and more market share can make up for lesser margins and produce a better economy of scale: those larger unit volumes can earn bigger bulk-materials savings from AMD’s suppliers, which will eventually bring the cost of HBM in line with, or even below, GDDR5.
Seems like Bulldozer to Excavator again: HBM slapped on a larger but failing architecture-based core. It doesn't matter how fast the memory is if you can't get the core right. That might explain the use of the interposer. I don't think this is the card AMD intended to release, but perhaps the 20nm failure forced the issue and placing HBM on an old core was the only choice.
The similarity to the CPU situation is uncanny, I feel.
So much paid shill liar BS in this “review” article that it’s outright laughable. More than 80% of the most respected and most accurate hardware reviewing sources out there made clear reports that the stock Fury X beats the living SHIT out of the 980 Ti in 8 cases out of 10 (and even manages to beat the Titanic X in 6 gaming tests out of 10 while not even being overclocked), losing noticeably to the 980 Ti and Titanic X only in heavily Nvidia-biased and GameWorks-gimped titles.
PcPer is pretty much done for, at least for me personally. I prefer my sources to be accurate, unbiased, and truthful to the very end. This here “article” has clearly shown me that PcPer is a no-good source for hardware testing reviews AT THE VERY LEAST.
Cool story bro! Tell me more!
Ryan seemed to like it quite a bit.
If you’re going to refer to “most accurate hardware reviewing sources out there”, maybe you could provide a link?
Wait, what?! The only review site I have seen so far that shows the Fury X doing somewhat better than the 980 Ti is Tom's Hardware. EVERYWHERE else shows it being slower.
In fact, PCPer was more generous than I thought they would be about this card.
Look, I am pissed right there with you – I wish it surpassed both the Titan and the 980 Ti – but with the current drivers it just isn't so, bud.
Here you go Ching:
http://www.pcgamer.com/amd-radeon-r9-fury-x-tested-not-quite-a-980-ti-killer/
http://www.ign.com/articles/2015/06/24/amd-radeon-r9-fury-x-review
http://hardocp.com/article/2015/06/24/amd_radeon_r9_fury_x_video_card_review/11#.VYqy-0ay58E
http://www.tomshardware.com/reviews/amd-radeon-r9-fury-x,4196-9.html#react4196
Dude, you need to start taking your medication again! This is a good, fair, unbiased review. Most of us will be really glad if you never come here again; this website doesn't need trolls like you complaining about free reviews.
It’s not really free if there are ads
Gee, thanks, jackass.
…wasn’t the Fury X presented as an overclocker’s card!?!?
What’s up with its pathetic overclockability!?!?
:-(((
No voltage control or memory overclocking yet. Early overclocking efforts with AMD overdrive have never been particularly good.
Pretty impressive how the Fury does better and better as resolutions increase despite the supposedly inadequate 4gb capacity.
Great advances with power consumption as well, and very cool and quiet.
Performance may not have lived up to the hype, but there’s still a lot to like about it, and it is competitive at $650.
I liked the review; I think you were very fair towards AMD. I hope they do well with the Fury X, and that the air-cooled Fury will be a good card. I just don’t want Nvidia to run away with everything. I hope they get some great drivers for their cards and do some great pricing; it could make the difference. 🙂
Great Article Ryan.
Even though it is not as good as I expected, I cannot complain.
Since it is new technology with HBM, it does a pretty damn good job.
I am sure that with new drivers this card will keep getting better.
I am curious about the X2, as it will have 2x4GB. I would not be surprised to see it go over twice as fast as the regular Fury, since some games like GTA V need more than the available 4GB.
It’s CrossFire, so you still only have 4GB available. It’s only in a Mantle or DX12 game that doesn’t use alternate-frame rendering where it can be an actual 8GB of available memory.
People claim PCPer is biased, but this was very balanced.
As for the product: some mixed feelings. Performance is up there, power usage is under control, the new tech (HBM) and the water cooling are exciting, and the whole premium branding and feel is nice…
But… it’s impossible to ignore the disadvantages: HDMI, memory size, practicality (a big radiator to deal with), poor OC headroom (this should place the 980 Ti as a comfortable winner on performance), and lack of love from GameWorks and some devs.
If you exclude the 980 Ti this would have been great, but Nvidia is too clever and knew exactly how to deal with it… the 980 Ti was and is the winner…
Well, I think if the 980 Ti weren’t out, or its price even rumored, I would bet the Fury would have been priced a bit higher, around the $850 it was rumored at. It would have been a win for AMD, as it’s a card that matches the Titan performance-wise and is $150 cheaper, though 4GB vs. 12GB is where the difference comes in. But it is likely Nvidia that made this card priced as low as it is.
Ryan Shrout…
http://www.tomshardware.com/reviews/amd-radeon-r9-fury-x,4196-5.html
http://www.tomshardware.com/reviews/amd-radeon-r9-fury-x,4196-4.html
http://www.tomshardware.com/reviews/amd-radeon-r9-fury-x,4196-6.html
Here the “picture” is slightly different…
Any comments ?
First image: I show nearly identical results, with the Fury X winning Metro: LL
Second image: BF4 and GTA V show similar losses for the Fury X. I don't test FC4.
Third image: Cool, I guess.
Don't just read our review, or Tom's or any one person's. Read TR, HWC, PCPer, etc. and then make a decision. I have faith that you can handle the critical thinking aspect.
If I were in the market to buy a card, I would buy the Fury X… in fact I would buy two (to replace my 290X twins)… that would be a no-brainer.
…but that’s not the (my) point…
The point is that it’s confusing and annoying to see reviews where the Fury X is consistently slower than the 980 Ti and other reviews where the Fury X is faster than the Titan X… I repeat: the TITAN X!!!
You mean one site (and the AMD PR slide) against the rest of the reviews that have gone up today? I guess that's confusing if you are so hung up on that one site (or on the AMD PR slide).
It’s not just that one site and that AMD PR slide… I have seen other contradictory numbers here and there on the Net… but I imagine some might be faked… :-p
In regards to being “so hung up,” and to be perfectly clear… I am as “hung up” on AMD as you are on NVIDIA.
Enjoy that second Fury X sitting in your case not being used at all, as they hardly have CrossFire profiles, lol. Good waste of money there, stupid!
Hey “Anonymous”… you are as cowardly as you are misinformed, and you lack education and manners.
Idiot, Fury X got beaten left, right, and center. Even overclock vs. overclock:
http://www.purepc.pl/karty_graficzne/premierowy_test_amd_radeon_r9_fury_x_vs_geforce_gtx_980_ti?page=0,15
Any enthusiast paying $650 for a high-end card will overclock it like mad, and this just shows how piss-poor the Fury X performs and overclocks compared to its competition.
You can't arbitrarily compare benchmark results without considering the settings used for each game. How is it that the same card can have different results? Different game settings. Plus, Tom's uses a combination of some "custom" benchmarks, in-game benchmarks (with FRAPS results), and these results will not be the same as the runs in other reviews. Oh, and check the Tom's test setup page for the inconsistent mix of driver versions used for their tests.
Just a quick example from your first link:
Tom's settings from their "how we tested…" page: "Metro Last Light: Built-in benchmark, 145-sec Fraps, Very High preset, 16x AF, Normal motion blur"
Ryan's settings from the review: Built-in benchmark (average of 3 complete runs, not a 145-sec FRAPS result), Very High preset, 16x AF, Normal motion blur, Normal Tessellation. (Gee, wonder why results might be different if we are looking at a much more demanding workload on the card?)
It's wise to consider the facts when parsing review data. There's a reason everyone has to reveal testing methodology. It makes a big difference. Go ahead and compare the other results you linked and you'll see variance – and you'll see that the Tom's review is an inconsistent mess.
It also explains the differences in power consumption seen between both reviews.
I was wondering which of your titles used for testing would stress the memory the most. I thought RTW 2 or another RTS game that uses a lot of discrete objects would prove out the memory more. From what I can tell, this review is heavy on first-person shooters/RPGs.
Nice review. I was hoping for a slightly more revolutionary result.
It seems like a not-so-bad strategy for cash-strapped AMD to release something new (HBM) and different (water-cooled) now, and most likely re-release the same product (with a ~200 mm² GPU at 14/16nm) next year as a decent mid-range card.
It makes me wonder if Nvidia will really have the comfort to skip version one of HBM.
And AMD already has most of the HBM engineering done, so very little re-engineering work should be necessary to accept HBM2 as soon as it becomes available. The interposers are already pretty much engineered to take the larger HBM2 stacks, and the memory traces to each stack will probably remain 1024 bits wide (4096 in total across the four stacks), so there are plenty of traces for HBM2, which doubles the memory per stack. AMD is ahead of the game memory-technology-wise and could get HBM2 into a Fiji update in very short order once HBM2 dies become available; AMD does not have to wait for its next microarchitecture update to revise the memory on its interposer, while Nvidia is still in the process of certifying HBM2 to work with its future GPUs. If you look at the HBM stacking diagrams, the bottom logic die of the stack will probably carry some differing circuitry from the HBM supplier to handle any HBM2 differences, and little if any reworking of Fiji's on-die memory controller should be needed; it was probably intentionally engineered that way by AMD and its HBM memory partner. AMD is definitely ahead of the game with HBM integration technology and will have that advantage in getting HBM2-enabled Fiji (and other) revisions to market as well. AMD should be working on getting HBM/HBM2 into its lower-tier derivatives, and it will be to market before Nvidia gets there with any HBM2-enabled SKUs.