Too much power to the people?
Does AMD have a problem on its hands with the power consumption of the new Radeon RX 480?
UPDATE (7/1/16): I have added a third page to this story that looks at the power consumption and delivery of the ASUS GeForce GTX 960 Strix card. This card was pointed out by many readers on our site and on Reddit as having the same problem as the Radeon RX 480. As it turns out…not so much. Check it out!
UPDATE 2 (7/2/16): We have an official statement from AMD this morning.
As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).
Honestly, that doesn't tell us much. And AMD appears to be deflecting slightly by using words like "some RX 480 boards". I don't believe this is limited to a subset of cards, or review samples only. AMD does indicate that the 8 Gbps memory on the 8GB variant might be partially to blame – which is an interesting correlation to test out later. The company does promise a fix for the problem via a driver update on Tuesday – we'll be sure to give that a test and see what changes are measured in both performance and in power consumption.
The launch of the AMD Radeon RX 480 has generally been considered a success. Our review of the new reference card shows impressive gains in architectural efficiency, improved positioning against NVIDIA’s competing parts in the same price range, and VR-ready gaming performance starting at $199 for the 4GB model. AMD has every right to be proud of the new product and should hold that position alone until the GeForce product line brings a Pascal card down into the same price category.
If you read carefully through my review, some interesting data cropped up around the power consumption and delivery of the new RX 480. Looking at our power consumption numbers, measured directly from the card rather than at the wall, it was drawing slightly more than its advertised 150 watt TDP. Testing was done at 1920×1080 in both Rise of the Tomb Raider and The Witcher 3.
When overclocked, the results were even higher, approaching the 200 watt mark in Rise of the Tomb Raider!
A portion of the review over at Tom’s Hardware produced similar results but detailed the power consumption from the motherboard PCI Express connection versus the power provided by the 6-pin PCIe power cable. There has been a considerable amount of discussion in the community about the amount of power the RX 480 draws through the motherboard, whether it is out of spec and what kind of impact it might have on the stability or life of the PC the RX 480 is installed in.
As it turns out, we have the ability to measure the exact same kind of data, albeit through a different method than Tom’s, and wanted to see if the result we saw broke down in the same way.
Our Testing Methods
This is a complex topic so it makes sense to detail the methodology of our advanced power testing capability up front.
How do we do it? It’s simple in theory but surprisingly difficult in practice: we are intercepting the power being sent through the PCI Express bus as well as the ATX power connectors before they reach the graphics card and directly measuring power draw with a 10 kHz DAQ (data acquisition) device. A huge thanks goes to Allyn for getting the setup up and running. We built a PCI Express bridge that is tapped to measure both 12V and 3.3V power, and built some Corsair power cables that measure the 12V coming through those as well.
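To make the arithmetic behind those measurements concrete, here is a minimal sketch of how per-rail samples turn into the power numbers we chart. The channel names and sample values are purely illustrative stand-ins, not the actual SignalExpress export; the point is simply that each rail’s power is its measured voltage times its measured current, and the total line in our graphs is the sum of the rails.

```python
# Minimal sketch of the per-rail power math. The channel names and the
# sample values below are illustrative stand-ins, not the real DAQ export.
import numpy as np

def rail_power(volts: np.ndarray, amps: np.ndarray) -> np.ndarray:
    """Instantaneous power (W) for one rail, sample by sample."""
    return volts * amps

# Fabricated five-sample capture standing in for a real run:
slot_12v  = rail_power(np.full(5, 12.1), np.array([3.2, 3.4, 3.3, 3.5, 3.1]))
slot_3v3  = rail_power(np.full(5, 3.31), np.array([1.2, 1.3, 1.2, 1.4, 1.2]))
pcie_6pin = rail_power(np.full(5, 12.0), np.array([5.9, 6.1, 6.0, 6.2, 5.8]))

total = slot_12v + slot_3v3 + pcie_6pin   # the "total" line in our graphs
print(total.round(1))                     # combined watts for each sample
```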
The result is data that looks like this.
What you are looking at here is the power measured from the GTX 1080. From time 0 to about 8 seconds the system is idle; from 8 seconds to about 18 seconds Steam is starting up the title; from 18-26 seconds the game is at the menus; we load the game from 26-39 seconds; and then we play through our benchmark run after that.
There are four lines drawn in the graph, the 12V and 3.3V results are from the PCI Express bus interface, while the one labeled PCIE is from the PCIE power connection from the power supply to the card. We have the ability to measure two power inputs there but because the GTX 1080 only uses a single 8-pin connector, there is only one shown here. Finally, the blue line is labeled total and is simply that: a total of the other measurements to get combined power draw and usage by the graphics card in question.
From this we can see a couple of interesting data points. First, the idle power of the GTX 1080 Founders Edition is only about 7.5 watts. Second, under a gaming load of Rise of the Tomb Raider, the card is pulling about 165-170 watts on average, though there are plenty of intermittent spikes. Keep in mind we are sampling the power at 1000/s, so this kind of behavior is more or less expected.
Different games and applications impose different loads on the GPU and can cause it to draw drastically different power. Even if a game runs slowly, it may not be drawing maximum power from the card if a certain system on the GPU (memory, shaders, ROPs) is bottlenecking other systems.
One interesting note on our data compared to what Tom’s Hardware presents – we are using a second order low pass filter to smooth out the data to make it more readable and more indicative of how power draw is handled by the components on the PCB. Tom’s story reported “maximum” power draw at 300 watts for the RX 480 and, while that is technically accurate, those figures represent instantaneous power draw. That is interesting data in some circumstances, and may actually indicate other potential issues with excessively noisy power circuitry, but to us it makes more sense to sample data at a high rate (10 kHz) but filter it and present it in a more readable way that better meshes with the continuous power delivery capabilities of the system.
Image source: E2E Texas Instruments
An example of instantaneous voltage spikes on power supply phase changes
Some gamers have expressed concern over that “maximum” power draw of 300 watts on the RX 480 that Tom’s Hardware reported. While that power measurement is technically accurate, it doesn’t represent the continuous power draw of the hardware. Instead, that figure is the result of a high frequency data acquisition system that may take a reading at the exact moment a power phase on the card switches. Any DC switching power supply that is riding close to a certain power level is going to exceed it on the leading edges of phase switches for some minute amount of time. This is another reason why our low pass filter on the power data helps represent real-world power consumption accurately. That doesn’t mean the spikes they measure are not a potential cause for concern; that’s just not what we are focused on with our testing.
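For the curious, here is a rough sketch of the kind of smoothing described above: a second-order Butterworth low-pass filter run over 10 kHz samples. The cutoff frequency and the synthetic signal below are assumptions chosen for illustration, not the exact parameters of our setup, but they show how brief switching spikes get folded back into a continuous power figure.

```python
# Sketch of a second-order low pass filter applied to 10 kHz power samples.
# The cutoff and the synthetic signal are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10_000              # DAQ sample rate, Hz
cutoff = 50              # assumed cutoff frequency, Hz (illustrative)

t = np.arange(0, 1.0, 1 / fs)
raw = 150 + 5 * np.random.randn(t.size)   # ~150 W draw with measurement noise
raw[::500] += 150                         # brief instantaneous spikes toward 300 W

b, a = butter(N=2, Wn=cutoff, fs=fs, btype="low")   # second-order Butterworth
smoothed = filtfilt(b, a, raw)                      # zero-phase filtering

print(f"raw peak: {raw.max():.0f} W, filtered peak: {smoothed.max():.1f} W")
```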
Setting up the Specification
Understanding complex specifications like PCI Express can be difficult, even for those of us working on hardware evaluation every day. Doing some digging, we were able to find a table that breaks things down for us.
We are dealing with high power PCI Express devices so we are only directly concerned with the far right column of data. For a rated 75 watt PCI Express slot, power consumption and current draw is broken down into two categories: +12V and +3.3V. The +3.3V line has a voltage tolerance of +/- 9% (3.003V – 3.597V) and a 3A maximum current draw. Taking the voltage at the nominal 3.3V level, that results in a maximum power draw of 9.9 watts.
The +12V rail has a tolerance of +/- 8% (11.04V – 12.96V) and a maximum current draw of 5.5A, resulting in a peak +12V power draw of 66 watts. The total for both the +12V and +3.3V rails is 75.9 watts, but as note 4 at the bottom of the table points out, the total should never exceed 75 watts, with neither rail exceeding its own current draw maximum.
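The arithmetic behind those two paragraphs is short enough to check in a few lines; the limits below come from the table above, using only the nominal voltages.

```python
# Slot power limits worked out from the per-rail current limits in the table.
V_3V3, I_3V3_MAX = 3.3, 3.0     # nominal volts, max amps (+3.3V rail)
V_12V, I_12V_MAX = 12.0, 5.5    # nominal volts, max amps (+12V rail)

p_3v3 = V_3V3 * I_3V3_MAX       # 9.9 W
p_12v = V_12V * I_12V_MAX       # 66.0 W
COMBINED_CAP = 75.0             # note 4: total slot draw must not exceed 75 W

print(f"+3.3V: {p_3v3:.1f} W, +12V: {p_12v:.1f} W, "
      f"sum: {p_3v3 + p_12v:.1f} W, allowed: {min(p_3v3 + p_12v, COMBINED_CAP):.1f} W")
# +3.3V: 9.9 W, +12V: 66.0 W, sum: 75.9 W, allowed: 75.0 W
```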
Diving into the current
Let’s take a look at the data generated through our power testing and step through the information, piece by piece, so we can all understand what is going on. The graphs built by LabVIEW SignalExpress have a habit of switching around the colors of data points, so pay attention to the keys for each image.
Rise of the Tomb Raider (1080p) power draw, RX 480
This graph shows Rise of the Tomb Raider running at 1080p. The yellow line up top is the total combined power consumption (in watts) calculated by adding up the power (12V and 3.3V) from the motherboard PCIe slot and the 6-pin PCIe power cable (12V). The line is hovering right at 150 watts, though we definitely see some spiking above that to 160 watts with an odd hit above 165 watts.
There is a nearly even split between the power draw of the 6-pin power connector and the motherboard PCIe connection. The blue line shows slightly higher power draw from the PCIe power cable (which is forgivable, as PSU 6-pin and 8-pin supplies are generally over-built), while the white line is the wattage drawn from the motherboard directly.
Below that is the red line for 3.3V power (only around 4-5 watts generally) and the green line (unused here; it only comes into play when the GPU has two 6/8-pin power connections).
Rise of the Tomb Raider (1080p) power draw, RX 480
In this shot, we are using the same data but zooming in on a section towards the beginning. It is easier to see our power consumption results, with the highest spike on total power nearly reaching the 170-watt mark. Keep in mind this is NOT with any kind of overclocking applied – everything is running at stock here. The blue line hits 85 watts and the white line (motherboard power) hits nearly 80 watts. PCI Express specifications state that the +12V power output through a motherboard connection shouldn’t exceed 66 watts (actually it is based on current, more on that later). Clearly, the RX 480 is beyond the edge of these limits but not to a degree where we would be concerned.
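Because the specification is written in terms of current rather than watts, a quick sanity check is to convert that measured slot-side +12V figure back into amps. The sketch below uses the ~80 watt peak from the graph above and assumes the nominal 12 V on the bus, which is an approximation (the real rail sags slightly, which would push the current even higher).

```python
# Back-of-the-envelope check of slot current against the PCIe limit.
# 80 W is the peak motherboard +12V draw from the graph above; 12 V is nominal.
SLOT_12V_LIMIT_A = 5.5                    # 75 W slot: 5.5 A max on +12V

measured_watts = 80.0
bus_volts = 12.0                          # nominal; the real rail sags a little
measured_amps = measured_watts / bus_volts

overage = measured_amps / SLOT_12V_LIMIT_A - 1
print(f"{measured_amps:.2f} A vs. {SLOT_12V_LIMIT_A} A limit ({overage:+.0%})")
# roughly 6.67 A, about 21% past the 5.5 A ceiling
```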
The Witcher 3 (1080p) power draw, RX 480
The second game I tested before the controversy blew up was The Witcher 3, and in my testing it drew more power than Rise of the Tomb Raider. When playing the game at 1080p it was averaging 155+ watts towards the end of the benchmark run and spiking to nearly 165 watts in a couple of instances.
The Witcher 3 (1080p) power draw, RX 480
Zooming in a bit on the data we get more detail on the individual power draw from the motherboard and the PCIe 6-pin cable. The white line of the MB +12V power is going over 75 watts, but not dramatically so, while the +3.3V power is hovering just under 5 watts, for a total of ~80 watts. Power over the 6-pin connector goes above 80 watts here as well.
driver fix incoming……everyone please wipe your asses and put your pants back on.
If you already got an nvidia tattoo on your face…im sorry. Hopefully it matches the intel logo on your arm.
Problem is word of mouth
I saw many posts on gaming forums already saying don’t buy the RX 480, it’ll fry your board.
It’s that kind of thing a fix will not make up for.
Thanks for the digestible testing, guys. I’m still confused about why the power draw from the slot and the 6 pin seemed to be mirroring each other; I am curious if that was just this instance, a design flaw, or an error in the BIOS.
Looking forward to coverage, testing and confirmation of the patch AMD is hoping to roll out.
Thanks for keeping us posted 😀
The reason why the two current lines are exactly the same is because AMD made the RX 480 unlike any card in history. AMD tried to make the card look like it was drawing less power, so instead of pulling most of the power from the 6 pin connector, it split the board’s 6 power VRMs in two. Half of the VRMs are powered by the soldered-together 12V lines on the 6 pin connector (making it really an 8 pin connector), while also soldering together the grounds (including the sense line – another major problem). The other three VRMs pull power only from the PCI-E slot. The card’s power load comes from both, so the current and wattage draw are the same for the 6 pin connector and the PCI-E slot.
Unfortunately, this is a very bad mistake on AMD’s part, as the PCI-E slot is only rated for 66 watts on the 12 volt pins. With a TDP of 150 watts, that means the RX 480 is already out of spec for PCI-E AT THE CARD’S RATED TDP. Since this card is also power starved, any demanding game will pull the card to 165 watts or so at stock clocks. When overclocked, the card’s draw can be as high as 195 watts. So the PCI spec is exceeded by 10% when running normally, 27% at stock in a demanding game, and a whopping 50% when overclocked.
AMD has misrepresented the card to the PCI approval committee, which could lead to fines or the card not being approved for sale. The RX 480 could also eventually hurt your motherboard, as the slot pins and traces are rated for much less current than the card is pulling.
Well, that does it, I’ll wait for the Sapphire Nitro custom board. I’ve got a feeling the only way to fix this in software is to throttle the card’s clock speed and/or voltage in the high power state and reduce performance, and thereby consumption, on reference cards.
This is my educated guess of what AMD will do. I do not claim to be an engineer. I’m a math guy..
Thanks for explaining, Mr. John Pombrio.
Sapphire is making a custom PCB with an eight pin.. I think they will solder the traces right..
AMD sent the card/card samples to an independent testing lab for the PCI certification testing, and that lab has to report the results to PCI-SIG, so AMD has no influence over the certification process and is out of the loop with respect to the independent lab’s report. The lab does the testing and presents its results to PCI-SIG, and AMD can see no results other than what PCI-SIG makes known in its decision process.
Any competent testing lab is not going to risk its independent certification credentials on a product that may have been improperly engineered, so that lab probably let PCI-SIG know of any problems in its report, and it’s going to be on PCI-SIG to answer for any final approval of the independent testing lab’s results. And you can be damn sure that the independent testing lab has the proper testing mule for any PCI related testing of GPU cards.
VRMs on GPUs Explained:
“(Tutorial) Graphics Cards Voltage Regulator Modules (VRM) Explained”
http://www.geeks3d.com/20100504/tutorial-graphics-cards-voltage-regulator-modules-vrm-explained/
Also an interesting project:
“Complete Disassembly of RX 480 – The Road to DIY RX 480 Hybrid”
http://www.gamersnexus.net/guides/2498-complete-disassembly-of-rx-480-and-road-to-diy-hybrid
I’m not generally one for the looks of components, but that GamersNexus hybrid was fairly horrific. There’s a lot of room to improve with a more specialized kit. Hopefully EK’s up-and-coming kit isn’t a run on the bank, and that other “non-vise clamp/hovering fan on a stick” options present themselves.
Paid fanboys pretended that the RX 480 issue doesn’t exist and that it was all media invention.
Later they pretended that the Nvidia GTX 480 violated the spec too and that the media didn’t report it because it is biased.
When proven wrong, they pretended that the Nvidia 750 Ti violated the spec too and that the media didn’t report it because it is biased.
When proven wrong, they pretended that the Nvidia GTX 480 violated the spec too and that the media didn’t report it because it is biased.
When proven wrong, they pretended that the Asus GTX 960 Strix violated the spec too and that the media didn’t report it because it is biased.
Now that they are also proven to be wrong, they start pretending that the GTX 950 violated the spec…
When will this stop?
Here is my suggestion to RTG over at their forum. Think I may have gotten to Raja, you should expect a call to hammer out the details going forward 😉
https://community.amd.com/thread/202526
This fanboy’s solution hasn’t even rated an answer or reply yet. Clickbait.
ITS GONNA POP
Allyn, Ryan, Josh, Jeremy… team.
What is the maximum power draw of the reference card’s fan?
Would it be possible to make a bypass/extension from the fan power cord to connect to one of the motherboard’s PWM fan connectors instead of the card’s PCB one?
If a motherboard BIOS update were also applied to read the temps from the GPU’s diode and control this motherboard PWM connector, wouldn’t this be a way to avoid an eventual downgrade of the specs via a driver fix? Or a complementary/optional aid to it?
At least, it would be a non-expensive “hardware” solution.
Could you guys please check the power draw of the stock fan?
Just a thought.
We all need AMD.
From Portugal, with respect.
In-depth analysis of 1070/1080 PCBs and RX480 PCB.
FE 10[78]0: https://www.youtube.com/watch?v=OsWJLKlDFCQ
Ref. RX480: https://www.youtube.com/watch?v=qG2e-v94L4M
Nothing to hide.
Very good channel btw.
Outdated – watch his Twitch stream where he actually verifies what is attached to what.
The 6-pin is out of spec, missing a sense pin. It’s actually electrically set up to allow 8-pin power (3 12V and 3 ground).
3 VRMs are attached to the 6-pin and 3 VRMs are attached to the PCI-E bus, totally isolated from each other. (ATX12V 2.2 and newer power supplies and motherboards – the spec was updated around 2006 – raised the ATX pins to 9A each from 6A, and 20+4 ATX motherboard power has 2 12V lines; the +4 adds one 12V line, done specifically for GPUs. 2 x 12 x 9 is 216W, a lot more than the 66W allowed by the five 12V contacts, at 1.1A each, on the PCI-E bus.)
Anyway, back on topic: even if they lower the PCI-E bus power so it does not overload the contacts and then pull 150W from the 6-pin, none of that extra power can go to whatever is attached to the 3 VRMs tied to the PCI-E bus. They are isolated from each other.
Yes I see.
Removing the strain from the fan would only alleviate the draw from the PSU.
On this video stream:
https://www.youtube.com/watch?v=E_E2eqtm4Yw
… he explains how the 6-pin is in fact functioning like an 8-pin and why AMD did it like that (a smart guess).
At 13:04 he finds the solution for revision B. I think.
This card is a real beast… with too much “character”.
Lisa Su should get this guy “on-board”. Or at least offer him a new Fury-X 🙂
Anyone with a voltmeter can now find out for themselves if the RX 480 cards have half of their VRMs tied directly to the PCI-E 12 volt supply pins. Buildzoid shows us how at the 54 minute mark:
https://www.twitch.tv/buildzoid/v/75850933
Sabotage or incompetence?
Any way you look at it, someone at AMD let this fiasco happen,
and ultimately Raja Koduri should be fired for letting something so easy to catch go unnoticed. AMD worked 5 years to rebuild their image, and this was their one chance to do it… that opportunity is now gone forever. AMD is even more of a joke to the tech community… the Polaris reference board is botched so badly it makes you wonder if anyone, anyone at AMD knows what they are doing.
(And “WattMan”.. lame name and execution… shows AMD’s lack of expertise in software design.)
BTW, didn’t AMD hire a known GPU tester not long ago from TechReport? How come the board was NOT tested by AMD in the past 6 months anticipating the reviews we have… no equipment, or not enough samples to do a simple PCIe voltage test? But TR and Tom’s Hardware got the money and enough RX 480s to do it, but not AMD? This situation is borderline criminal on AMD’s part…
Seems like a dozen-plus people at AMD should be held accountable for this.
From the deep analysis of the reference design, it seems many things went wrong with the RX 480, from the power distribution to the too-limited power delivery.
Now, it almost seems that AMD was expecting something different from GF but it never materialized, and in a panic AMD had to over-volt Polaris to its limit.
Or AMD was greedy and wanted to sell a $150 card for $200 by overclocking it past its design limits.
It’s 1) or 2).. both show AMD management to be incompetent.
I can only guess that AMD’s reputation is so horrible, no good engineer wants to work there?
It seems AMD is having the same problem it had 5 years ago.
The result was a drop in sales.. and in turn a drop in silicon orders, and in turn Global Foundries penalizing AMD with billion-dollar penalty fees.
So AMD is once again facing the same tune… first in line, and will pay fines that will benefit the other GF customers.
It almost seems like it’s an “inside job”, or utmost incompetence on AMD management’s side.
Yet AMD is giving away millions in shares and bonuses to upper management for a “good job” …
From the outside looking in, it seems like Zen is going to flop, really badly, and Vega will fall short by about 30% compared to Pascal, forcing AMD to sell GPUs at razor thin margins while Global Foundries collects its usual fat margin on raw silicon.
Makes you wonder if the rumors of Raja Koduri wanting to go to Intel are true and the process needed speeding up.
Can you please provide a link to the source of this rumor?
“BTW, didn’t AMD hire a known GPU tester not long ago from TechReport? How come the board was NOT tested by AMD in the past 6 months anticipating the reviews we have… no equipment, or not enough samples to do a simple PCIe voltage test? But TR and Tom’s Hardware got the money and enough RX 480s to do it, but not AMD? This situation is borderline criminal on AMD’s part…
Seems like a dozen-plus people at AMD should be held accountable for this.”
I’m erring towards agreeing with much of this. AMD cocked this one up badly.
I picked up my RX480 from Akihabara on Saturday and have already gamed the crap out of it. Zero problems so far, even with GTA5.
I’d say if you have a decent motherboard and a decent power supply (Cooler Master, etc.), you are not going to have any problems.
… immediately.
Yep. Give it anywhere between 3-6 months depending on motherboard quality.
Can’t wait for AMD’s new driver fix release for this.. I think third-party partners of AMD will not have the same problems, and I doubt that AMD’s engineers did not see this coming. Specifications are mere baselines, not limits, as these specifications tend to have higher offsets than what is stated. That is what you call engineering back there; specification is all about being on the conservative side of the design, but you are not held to the specification, it is just your baseline – you can go further than that at your own cost. Either way, AMD should fix this. And we must also point out that this is the first review of a card at this level, I guess? I was laughing when Ryan kept on saying “Nvidia, nvidia, nvidia” during their video with Allyn. Allyn was like WTF Ryan?! ahahaha!
Maybe AMD knew the problem existed and didn’t care. Why was the 8 gig version released first? AMD knew it would get greater performance and thus reference benches would be set by it. Bigger numbers mean bigger sales.
If it couldn’t even beat the last-gen 970 convincingly, it would have been merely an OK card. They wanted more sales than merely fanboy sales needing an upgrade.
Partner cards are probably going to fix the “current” problems, but cost may be out of reach for some, at closer to $300 than $200.
This is speculation on my part. AMD also may have had the fix ready to go if they managed to con everyone. Everyone would innocuously lose performance with the next driver update. AMD may have given some lame excuse as to why performance went down a few %, or not at all. The problem would have never been found. AMD would have gotten away with it if it wasn’t for those damn kids at PC Per and Tom’s and other sites that measured the same way. Sorry, couldn’t resist a Scooby Doo reference.
What drivers were used when testing the power of the GTX 960? The Nvidia 347.25 beta driver is what Tom’s used, a driver from over a year ago – more than enough time for the power to be worked out. AMD says they are fixing this with a driver in just a few days. Are you using a driver 1.5 years newer than Tom’s?
Tom’s thought of everything and used an appropriate driver. The 960 Strix came out in January of 2015, so what difference does it make if PC Per used newer drivers? Both found that there was NO issue.
Maximum draw was measured at 147 watts under torture. Correct me if I’m wrong, but how does that even compare to 166-200 watts of RX 480 power draw over the 6-pin (75 watts max) + PCI Express slot (75 watts max)? The 960 Strix isn’t even above maximum spec. Normal TDP is 120 watts. With overclocking you can add at most 20%, which would put this at 144 watts. Even if the Strix briefly spikes over 66 watts on the PCI Express slot, it won’t do damage, as the average is below spec. Sustained draw is where the RX 480 is at. That causes damage over time.
https://www.techpowerup.com/reviews/ASUS/GTX_960_STRIX_OC/27.html
I have read this article with a great deal of interest.
Many of the people posting are correct, stating it is not an intermittent surge that will cause damage – it is what you do consistently. However, my gaming son will only leave his PC for food and drink.
Most people using this card will recognise that fitting it into their old system is like putting a Ferrari engine in a truck. It will move – but be severely limited. They are therefore likely to update old hardware at the same time.
When measuring signals like this, you have to be very careful about noise. Connecting your reference point to the wrong connection can give the appearance of surges – for example the very high spikes seen on the oscilloscope graph.
Any software fix will only have one effect – slowing the card down. The only viable long-term solution is to wait for the next version of the card to be released.
I have an AMD card in my main system and an Nvidia card in my ‘older’ system. Both are excellent for what I use and I owe no allegiance to either manufacturer.
I will enjoy this card – later.
I suspect that AMD tried to get the card to do more than it was specced to do late in the game. It is remarkably cheaply made. You could even say it is great engineering: the least cost for the most bang at a $230 price point (8GB). Turn the card back down to the design spec, where it is not the equal of a GTX 970, and all will be good, but the card becomes less compelling. At $260 the card isn’t quite as exciting, even though it is still a nice chip. The 4GB card at $199 is a winning 1080p card, but AMD marketing apparently wanted more.
Got the Sapphire on Friday, complete system shutdown 3 times playing House of the Dying Sun.
Maybe 5 minutes of actual gameplay.
http://pcpartpicker.com/list/bnq8KZ
Same here, system shutdowns just after a few minutes of gaming. This is on an ASRock FM2A75 Pro4 mobo.
Returning it while I still can. I’d rather pay another $50-100 for a GTX 1060 that doesn’t risk deteriorating my board and offers better performance.
I’m not an electrical engineer, but it sounds like the 480 reference card is underpowered (AMD’s fault).
So who’s surprised AMD failed to deliver their magical perf maintaining power consumption tweak driver on the Tuesday they promised?