Radeon Software 16.7.1 Adjustments
AMD released a new driver today that they hope fixes the issues with power consumption. Does it?
Last week we posted a story that looked at a problem with the new AMD Radeon RX 480 graphics card's power consumption. The short version of the issue was that AMD's new Polaris 10-based reference card was drawing more power than its stated 150 watt TDP, and that it was drawing more power through the motherboard PCI Express slot than that connection is rated for. Sometimes that added power draw was significant, both at stock settings and overclocked. Seeing current draw over a connection rated at just 5.5A peaking over 7A at stock settings validly raised an alarm, and our initial report detailed the problem very specifically.
AMD responded initially that "everything was fine here," but the company eventually saw the writing on the wall and started to work on potential solutions. The Radeon RX 480 is a very important product for the future of Radeon graphics, and this was a launch that needed to be as close to perfect as possible. Though the risk to users' hardware from the higher than expected current draw is muted somewhat by motherboard-based over-current protection, it's crazy to think that AMD actually believed that was an acceptable scenario. Depending on the "circuit breaker" in any given system to save you, when standards exist for exactly that purpose, is nuts.
Today AMD has released a new driver, version 16.7.1, that introduces a pair of fixes for the problem. One of them is hard-coded into the software and adjusts power draw from the different +12V sources (PCI Express slot and 6-pin connector), while the other is an optional flag in the software that is disabled by default.
Reconfiguring the power phase controller
The Radeon RX 480 uses a very common power controller (IR3567B) on its PCB to cycle through the 6 power phases providing electricity to the GPU itself. Allyn did some simple multimeter trace work to tell us which phases were connected to which sources and the result is seen below.
The power controller is responsible for pacing the power coming in from the PCI Express slot and the 6-pin power connection to the GPU, in phases. Phases 1-3 come in from the power supply via the 6-pin connection, while phases 4-6 source power from the motherboard directly. At launch, the RX 480 drew nearly identical amounts of power from both the PEG slot and the 6-pin connection, essentially giving each of the 6 phases at work equal time.
That might seem okay, but it's far from what we have seen in the past. In no other case have we measured a graphics card drawing as much power from the PEG slot as from an external power connector on the card. (Cards without external power connections are obviously a different discussion.) In general, with other AMD and NVIDIA based graphics cards, the motherboard slot provides no more than 50-60 watts of power, while anything above that comes from the 6/8-pin connections on the card. In many cases I have seen power draw through the PEG slot as low as 20-30 watts when the external power connections provided plenty of overhead for the target TDP of the product.
Starting with the 16.7.1 driver, AMD will automatically reprogram the power controller on the RX 480 to better divide the power draw between the 6 available phases on reference cards. This is a process that occurs at each and every boot; it is not a permanent change to the VBIOS. It's possible, and likely, that future cards and partner cards will have this change integrated at a lower level, negating the need for the driver to recognize and update the controller logic. But for now, with launch RX 480s in the wild, that's how the process works.
As I understand it, what AMD is doing now is very similar to what The Stilt on the Overclock.net forums attempted earlier in the week. Power phases 1-3, which source +12V from the 6-pin connection, are now given more time than phases 4-6, shifting the weight of power draw toward the 6-pin connector. We'll be able to calculate that exact ratio when we show you the power consumption data from our testing, but the goal is to draw less power from the PCI Express slot and more over the 6-pin connector while maintaining the exact same power and performance profiles. To be clear: this fix will not affect performance, and my testing shows that to be the case.
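To put rough numbers on that rebalancing, here is a minimal sketch of how a phase-time weighting maps to per-rail current. The 50/50 launch split matches the behavior described above; the 60/40 post-fix split is an illustrative assumption, not AMD's actual controller programming.

```c
/* Minimal sketch: how a phase-time weighting maps to per-rail current.
 * The 60/40 "fixed" split is an illustrative assumption, not AMD's
 * actual IR3567B register values. */
#include <stdio.h>

static void report(const char *label, double board_w, double w_6pin)
{
    const double rail_v = 12.0;                /* both rails are +12V */
    double p_6pin = board_w * w_6pin;          /* watts via 6-pin     */
    double p_peg  = board_w * (1.0 - w_6pin);  /* watts via PEG slot  */
    printf("%s: 6-pin %.1f W (%.2f A), PEG %.1f W (%.2f A)%s\n",
           label, p_6pin, p_6pin / rail_v, p_peg, p_peg / rail_v,
           (p_peg / rail_v > 5.5) ? "  <-- over the 5.5 A slot spec" : "");
}

int main(void)
{
    const double board_w = 150.0;  /* stated TDP */
    report("launch (equal phase time)", board_w, 0.50);
    report("16.7.1 (assumed 60/40)   ", board_w, 0.60);
    return 0;
}
```

At the stated 150 watt TDP, an equal split puts 6.25A on the PEG slot, while the assumed 60/40 weighting brings it down to 5.0A without changing total board power.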
You have to wonder why this wasn't the direction taken by AMD engineers initially. As you will find in our results, the 6-pin connection is definitely drawing more than the 75 watts it is rated at, but the 6-pin cabling and connectors are actually rated at 8-9A per pin, versus 5.5A total for the PEG slot. If you are going to draw more power than rated over one of the two options, the over-engineered and directly fed 6-pin connection is clearly the way to go. I can only assume an oversight on the board team allowed it to happen. (Also, let's not forget that using an 8-pin connection and weighting power toward it would have prevented both issues.)
Compatibility Mode
Also included with the 16.7.1 driver is a new toggle in the global settings called Compatibility Mode. It’s an absolutely mind-numbing name for a feature that simply does one thing: lowers the total target power draw for the GPU.
This fix was actually the first we tested from AMD, though on its own it wasn't enough to alleviate the problems with power draw from the PCI Express connection. This adjustment does nothing to change the weighting of power draw between the two +12V sources and instead focuses only on lowering the total power draw of the GPU. Yes, this does mean that there will be some cases where performance drops, though I have seen articles talking about performance increases with undervolting.
AMD indicates that we should think of this setting as a secondary solution, one that is NOT enabled by default, for any users that might be overly concerned about power and current draw on their motherboard or power supply. As I will show you in our testing, the differences in power draw are definitely measurable, and performance seems to be impacted minimally in the couple of spot checks we have done.
thanks.
i’ve 1080 ordered.
if your budget can accommodate a 1080 it’s pointless to consider a card ~25%-30% the price anyhow.
Are we sure this is a driver review or is it republicans foaming at Hillary’s emails?
Well, Good for you! Now go away.
I just had a pizza, it was delicious.
It wasn’t made by nVidia, so that must piss you off.
Then you're ignorant to come here!
this article is about a card priced at $200, but you bought a card priced above $600, yet you came here and said: thanks, I bought it?
Wow $600 for 1080, we get to pay $1100-1300 in Australia
700 US dollars equals 925.83 Australian dollars; the rest is VAT, I suppose, so the cost isn't really surprising, right?
https://youtu.be/BHqgHFcmAOc
You are ignorant to claim that a RX480 8GB can be found anywhere for $200.
Although you are technically correct, many people have already gotten 480 8GB models for $200. First, Best Buy sold them at the wrong price by mistake; then it was discovered that all of the launch 4GB cards were 8GB cards with a 4GB sticker slapped on. The rest of the memory can be unlocked by flashing the VBIOS. http://www.legitreviews.com/amd-radeon-rx-480-8gb-vbios-now-available-flash-4gb-card-8gb_183753
You are also foolish to think that any GTX 1080 is going to sell for $699 or even $799; almost every GTX 1080 I have seen selling on eBay goes for $899. It goes both ways, son.
Damn dude, why so many?
So you live in a rusting double-wide and you have forced your wife to work an extra shift for two weeks at the sugar shack to pay for Nvidia's overpriced 1080 FE gimped kit! Your teeth are even green to match Nvidia's colors; have you ever used toothpaste?
Could be an nvidia troll. I do not believe what an nvidia fan writes.
Moron
I ordered 2x RX 480s on release day and was not bothered by a bit of extra power, but am happy the power is even lower now.
there are at least two power efficiency features they haven't implemented yet: "boot time power supply calibration" and "anti aging compensation", so power efficiency should improve even more by then.
I also ordered a 1080 after the fiasco. I read a lot of posts from real people that actually fried their motherboards. I was 100% ready to support AMD with 2x RX 480s, but it's not worth messing with an unfinished product.
Are you st oopid? How were you considering the RX 480 and suddenly considering the GTX 1080? Both cards aren't even in the same price/performance bracket. Plus, why bash AMD for an "unfinished product" when the GTX 1080 is a paper launch that lacks Async support, does not have preemptive pixel enabled, also lacks mixed resolution support, and has no availability?
Yeah, but it's normal for nvidia to present features they implement months later, while for AMD it's a reason to not buy their product. "Boot time power supply calibration" and "anti aging compensation" should be implemented in the next WHQL, though.
S/He said 2x RX 480. When CrossFire scales, the performance is quite near the GTX 1080 (of course, it doesn't scale that often):
https://www.techpowerup.com/reviews/AMD/RX_480_CrossFire/19.html
So is this storm in the teacup finally over?
I guess this is what happens when you very much want to have only one 6-pin to get into all those OEM contracts. Seems like it has worked; Dell sells the 480. Add to this that you want to release early on a new process that GlobalFoundries is supposed to deliver on. I think there are notable differences power-wise between the early batch of cards and the batches that arrived just weeks later. But proving that would mean testing many cards from the same batch and comparing them, which is not really practical.
I think the most noteworthy thing is that the card gets the same performance on the new driver using less power under compatibility mode as the old one. I love how AMD responded to this. Imagine if they had a similar R&D and driver development budget as Nvidia…
They had to bump the perf by 3% to offset the drop in performance due to the power change. Performance that would have gone to the card anyway. So really, it is worse.
If they had the same budget as nVidia they wouldn't have fudged that part to start with, and it might have 15% better performance (GTX 1060 rumor) so that it could actually compete.
Worse is subjective, depending on how you look at it.
Your glass looks half empty; can I offer you a refill?
Are you a complete numbskull or do you just have poor reading skills? The card does not lose performance with the fix. The power is rebalanced so that more power is drawn from the 6-pin than from the PCI-E slot. The card is still using the same power as before when compatibility mode is OFF, and performance is also slightly increased.
He was talking about the dynamic OC potential (boost), which this card just sacrificed, since it needs to always be "boosted" to meet minimum advertised specs.
Good to know. It still exceeds specs unless you turn on the compatibility mode (and even then, still sometimes), but it damn sure is a lot better than before.
This was just an example of shoddy quality control/testing and never should have been allowed to ship. This, coupled with the “some have 8GB and disable 4GB of it” just shows you that AMD was rushed to get product out the door and sacrificed quality control/testing in an attempt to scoop nVidia.
No wonder AMD is $2.2 billion in debt and hasn’t shown a profit in the past 12 straight quarters. They have serious management issues, and it shows.
Not really bad management; nvidia released early, so they had to in order to pull in any amount of revenue that would matter. 1060s have been on the way; they're just stockpiling so that they will have a notable quantity when release day hits.
You would think the engineers would have considered the "cut down" approach that nvidia takes a bit more seriously (one board, lots of cut-downs) to mitigate costs. Doesn't seem like something they're interested in; however, they sure do know how to break QC and pass the "savings" on to the customer lol
Could you be any more of a paid Nvidia FUDster! The 4/8 was for the reviewers, but some folks got a little Easter egg memory gift out of that 4/8, and at least it's not the 3.5 = 4 Nvidia nonsense.
Let's thank AMD for designing their GPUs with more than the gaming GITs in mind, so AMD's GPUs will perform better for computational uses and other graphics uses. Nvidia's problems will become more apparent as the entire gaming industry switches over to DX12 and Vulkan, and especially for VR gaming, where AMD's Asynchronous Compute fully in the GPU's hardware will result in much lower VR-related latency. Nvidia will not be able to code their way around lacking the fully-in-hardware asynchronous compute and processor thread scheduling and dispatch that AMD has in their GCN and GCN/Polaris GPU micro-architecture.
AMD has been doing that for a long time, for example selling 290s that could be unlocked to 290Xs at launch to capture more market share; it's nothing unusual…
I really think software will win nvidia the graphics card market… their new tech is very interesting…
"AMD's Asynchronous Compute fully in the GPU's hardware will result in much lower VR gaming related latency."
Async compute has absolutely nothing to do with latency for VR. The key feature for VR is pre-emption for injecting Timewarp just before buffer readout, and this is ALREADY achieved on older cards from both vendors.
This is obvious if you actually know what Async Compute is: the ability to run compute operations in parallel with graphical operations. For VR, you are performing two graphical operations in series: rendering an image, then warping an image. You cannot warp the image before you've rendered it, so no parallelism gains there. You cannot 'pre-warp' a preceding frame to be ready 'just in case', because the whole POINT of timewarp is to sample the IMU IMMEDIATELY before warp for minimum possible latency.
Bro, I live in Brazil, and the 1070 is 2200 BRL. At the same time, AMD prices are even more "taxed" because people that "import" AMD want even more profit from customers… like the RX 480 is 1450 BRL…
Which card do you think is best for gaming in 1080p? (most mmo’s)
my spec: i5 4690k, ga-h97m-d3h – 1tb hdd WD
And why should I pick an RX 480 card if I just want to play DX11 games mostly, because most MMOs will not support DX12 for a long time…
thank you bro
Exceeding the 6-pin power spec is much safer than exceeding the PCIe power spec. Several orders of magnitude safer.
Finally someone recognizes that the 6-pin power connector is the way to go if you need to draw more power than the specification. Hell, I've seen so many rigs with 6-pin-to-8-pin adaptors, and they run perfectly fine. And that is up to double the spec of the 6-pin connector!
6-pin PCI-E power connectors are ridiculously under-spec’d. 75W is a crazy-conservative spec for that connector. Look at it this way:
the 6-pin connector has three yellow wires and three black wires. The center yellow wire is not powered (or is not supposed to be). The center black wire is not a ground, it is a “sense” – it IS a ground wire on the PSU side of the connector, but it doesn’t carry any operating current, it’s only there so the card can confirm that it has a 6-pin power connector plugged in. This leaves two yellow “+12V” wires and two black “Ground” wires.
The 8-pin connector has the same six wires with the same arrangement. It adds two black wires on the PSU side, and they’re both ground wires. On the GPU connector, one pin is a second “sense” pin – when that pin is grounded, the card knows that an 8-pin power connector is plugged in – which allows the card to draw power from the middle yellow “+12V” wire (the one that is not powered in a 6-pin connector). The other additional pin is a third “Ground” connection to accommodate the extra current provided by the third “+12V” wire.
So the 8-pin connector is rated at 150W, using three +12V wires and three Ground wires. The 6-pin connector has two +12V wires and two Ground wires. If each wire is capable of at least 50W, shouldn’t the 6-pin be capable of 100W at least over its two wires?
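That back-of-the-envelope math is easy to verify. A minimal sketch, assuming the spec wattage is spread evenly across the live +12V pins and using the oft-quoted ~8A-per-pin terminal rating (an assumption, not an official PCI-SIG figure):

```c
/* Per-pin current at the connector's spec wattage vs. the headroom
 * implied by a ~8 A/pin terminal rating (assumed, not official). */
#include <stdio.h>

int main(void)
{
    const double v = 12.0, amps_per_pin_rating = 8.0;
    struct { const char *name; double spec_w; int live_pins; } c[] = {
        { "6-pin (75 W, two live +12V pins)",    75.0, 2 },
        { "8-pin (150 W, three live +12V pins)", 150.0, 3 },
    };
    for (int i = 0; i < 2; i++) {
        double a_per_pin  = c[i].spec_w / v / c[i].live_pins;
        double headroom_w = amps_per_pin_rating * v * c[i].live_pins;
        printf("%s: %.2f A per pin at spec, ~%.0f W at the pin rating\n",
               c[i].name, a_per_pin, headroom_w);
    }
    return 0;
}
```

Run it and the 6-pin works out to about 3.1A per pin at spec, or roughly 192W if you trust the pin rating, which is the commenter's point: the 75W figure leaves enormous margin.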
So, you wanted to see the 3.5GB + .5GB under the sticker. Hmmm.. True Diehard NV Idiot, I see.
I think it is misleading for AMD to put the PCIe power fix and game/Polaris driver improvements in the same driver update. I would like to see performance results that aren't affected by the "optimizations designed to improve the performance". Take a workload that isn't affected by the Polaris "improvements" and see how the PCIe "fix" really affects performance.
Really? It’s misleading to include performance tweaks in a graphics driver update, just like nearly every single driver update ever in the history of graphics drivers?
What? You can get that… just don't use compatibility mode and you've got the 3% performance increase (plus power draw on the PCI slot reduced significantly). Want the lowest power draw… keep the same performance as at launch and flip the switch.
Why do I get the feeling that if Nvidia were to implement a hardware fix in a driver update (say, for example, Maxwell’s tendency to idle at a really high GPU frequency when powering a 144Hz monitor, driving up idle power consumption by 70W) and they included other “improvements” and “optimizations designed to improve the performance” in that driver, you would not only be completely fine with it, but you would in fact argue the point with someone else who made the exact same comment you just made?
Why not get a FLIR camera and check temps at the PCI-E connector instead of going crazy about how the card is still too high on current draw? OMG!! It's still higher than 5.5 amps!! (sarcasm) If the connector fingers stay within a temperature that doesn't melt the plastic housing, then I see no cause for concern…
Also, all the power circuitry for the PCI-E connectors is connected in PARALLEL, so if more than 70 watts was going to destroy the motherboard, then you could never have more than one video card on the PCI-E bus…
No.
You aren't worried about melting plastic. It's about burning up pins.
VOLTAGE goes through connectors in parallel, not current/power. Only the underlying copper plane deals with the combined power/current.
Actually, Ryan, the melting of PVC can have severe long-term health effects, including but not limited to hormonal disorders.
PVC will start emitting fumes above 70°C.
That being said, I will provide you with a thermal image of an R9 295X2 with a 7% overclock on its core, running Unigine Valley.
The connectors are fine even at 490W power draw on 2x 8-pins, which is about 160W out of spec (the R9 295X2 only draws about 30W or so from the PEG slot, according to reviews).
http://i.imgur.com/jjNqeUB.jpg
This is exactly what Ryan should have tested, since he made such a big deal about exceeding the 5.5 amps. It's the heat build-up you have to worry about anywhere a connector is used, because connectors are the weak points due to higher contact resistance, so you get higher I²R losses as current draw goes up. If there is no major heat build-up that can damage the plastic, then there is no issue with pulling more current than spec allows.
There’s PVC in computer hardware components? O.O
Sure, why not? It’s cheap, easy to work with, does a very good job as an insulator at operating temperatures below 70°C, and in this particular use case (computer hardware components), they’re (supposed to be) being applied in an environment that will pretty much never come anywhere close to temperatures at which PVC may start to break down. (Except in circumstances catastrophic enough that whether or not PVC is involved is probably the least of your worries.) XD
That being said, I wouldn’t doubt it for a second if standard Molex connectors were made of PVC or based on it. But I doubt PCI-E slots would be. They’re probably closer to a high-temp thermoplastic like Rynite or something.
The 6-pin connector was never the issue. Ryan has said multiple times that it's less of an issue if the 6-pin is overdrawing, as the connector can more than handle it, and the power is coming from the PSU. The important part is the amperage coming over the pins in the PCI-E connector on the motherboard.
What pins are you talking about? Pins that run through the connector into the motherboard? Those touch plastic…which would melt first.
Watching all of this, I have to wonder: where is the control PCBA? Have you measured a video capture card? An Nvidia GTX 970/980? An AMD Fury? Why are we not comparing these measurements to different cards? It really does seem blown out of proportion to me, but that's just me.
Burning pins indicate higher I²R losses, driven by heat from increased current draw, which means it WILL melt plastic. However, a one or two amp increase is not going to generate enough heat to do that.
You need way more.
What? So you are trying to say that if I parallel a few devices onto one wire branch, the current will not flow through this branch? HUH?? The +12V source path ties back together on the motherboard and the ground paths all tie to the ground plane. It branches out to the individual PCI-E slots, but the total current still goes through the main source path. So, like I said, if damage was going to occur to the motherboard, then putting multiple cards into the slots would have done that already.
The current limit of 5.5 amps is on a PER SLOT basis. It's the slot pins that are the weak point…
Don't worry about the copper burning. The plastic will be melting first.
Let’s give AMD credit for sorting this out quickly. GG AMD.
i'm sure nvidia shill Ryan will find something else to cry about.
Did you not even read the article?
In Ryan’s conclusion:
“But I do believe that AMD has done it’s best to address the power consumption concerns without a hit to performance, getting the RX 480 to a much more reasonable power situation. I no longer believe that consumers should be worried about the stability of their PCs running the RX 480 with the 16.7.1 driver installed.”
AMD made a mistake; Ryan was happy to make a full analysis. Now it is time to cover up. No reason to burn all the bridges to AMD.
But you are not going to see this site making a full analysis of Nvidia's problems. Only linking to other sites when the problem is already known.
Seriously? That's what you come in here with?
You were really happy in the previous article when I was saying that AMD messed up. Now you have a problem?
https://pcper.com/reviews/Graphics-Cards/Power-Consumption-Concerns-Radeon-RX-480#comments
So tell me. When was the last time you did a multi-page analysis of problems on an Nvidia card? And have you updated that analysis?
No analysis for the fan problems on 1080 cards.
https://www.techpowerup.com/222895/nvidia-gtx-1080-founders-edition-owners-complain-of-fan-revving-issues
No analysis for the DVI problem on Pascal.
https://www.techpowerup.com/223669/geforce-gtx-pascal-faces-high-dvi-pixel-clock-booting-problems
No analysis for the Vive problems on Pascal.
http://www.tomshardware.com/news/nvidia-vive-displayport-incompatible,32204.html
No recent analysis for the high idle power problems. You never continued that old article, because Nvidia never fixed it and the problem also appears on Pascal cards.
http://techreport.com/news/30304/nvidia-pascal-cards-still-exhibit-high-refresh-rate-power-bug
No analysis for the DPC latency and stuttering problems on Pascal.
https://forums.geforce.com/default/topic/941579/geforce-1000-series/gtx-1080-high-dpc-latency-and-stuttering/1/
There were also people talking about throttling problems on Pascal. No analysis here either.
So, where are your analyses of the above? You only just barely mentioned the fan issue and the Vive problem. You never really spent time on them. You avoid it. I checked ALL the articles in the last pages of the Graphics Card section. Nothing. Nothing negative for Nvidia. NOTHING.
I really think Raja made a mistake giving you an exclusive for the RX 480. You will never be objective. He should stop trying.
None of the things you mentioned above are important. The RX480 problem was important. The GTX 970 3.5+.5gb thing was important, and PCPER covered it. Get over yourself, these guys at pcper are enthusiasts, not fanboys.
I agreed with you in the previous article because you were able to make a good comment that contributed to the discussion. If we looked at every nit-picky thing that came up with every card, there would be a crapload of posts about AMD as well, which we can't do because of folks like you, so we stick to the big issues.
Good comments are not just the comments that you agree with. You are missing the point of the comments section and the whole reason for the forums' existence. And you never miss nit-picky things about AMD cards. There is always an analysis of those.
PS The excitement on your face in a previous webcast, when Ryan was saying that GTX 1060 is coming and doesn’t leave much time for RX 480 to play alone in the market, was priceless. :p
https://www.youtube.com/watch?time_continue=3912&v=-rHG9hfTCGw
Yes, we are all so shocked that enthusiasts are excited about new hardware. How about you look less than a minute past that point, where I was assuming the 1060 would come out at higher cost/perf than the 480.
It's Nvidia. It's like predicting that the sun will rise in the east.
Anyway, the whole mess with the RX 480 was a good opportunity for sites to show people some examples of cards that use much more power from the PCIe bus and/or the PCIe power connector under overclocking. That opportunity has come and gone. People will keep overclocking their cards thinking that the only thing they should care and worry about is temps.
Well yes, it would be interesting to see, I agree. Does pcper have some GTX 750 Ti or GTX 950 without a 6-pin PCIe connector laying around (or a Radeon R7 250)? Check the power with an OC and the TDP slider at max. If I remember correctly, at least with the GTX 750 Ti the BIOS restricts it to 75W. I don't know about the GTX 950, though; they are not designed by nvidia itself, but they use the same BIOSes and the same kinds of restrictions apply. (Hint: you need to hard-mod your card or edit your BIOS to bypass BIOS power restrictions.)
I would not really be that concerned about PCIe power connectors, though. Although the 6-pin connector is rated at 75W, it can quite safely pull more than double that. So if you have to go out of spec, it is best to do it over those rather than the PCIe slot.
The 750 Ti is a 60W TDP card. It shouldn't have problems, even after overclocking it. The GTX 950, on the other hand, is probably at its limits at defaults. Overclock it and you've got an RX 480 in your slot.
I wouldn't be using the GTX 950 as an example if W1z at TechPowerUp wasn't getting 20% extra performance after overclocking it. That 20% extra performance for me is an indication that the card is NOT power limited. So far I have gotten plenty of insults, but no one has said that I am wrong in that assumption.
The 6-pin is considered safer, but what happens when you use a 600W PSU that costs $25 and looks like it uses a design targeting Athlon XP systems?
http://www.ebay.co.uk/itm/ACE-600W-Black-ATX-Gaming-PC-PSU-Power-Supply-120mm-Red-/201449961886
Well, you can check the BIOS limits yourself (I don't have a Windows machine near me right now, and the BIOS tweaker did not work with Wine):
Asus GTX950 no 6-pin bios
Maxwell II BIOS Tweaker
For somebody who thinks Ryan Shrout is an Nvidia shill, you tend to visit his site and youtube channel a whole lot.
I’m sure he appreciates all the hits and views he has been getting from you…keep on keepin on.
Come on man, Pcper, Tomshardware and others pointed out the problem, and AMD fixed it in a short amount of time.
I think all parties deserve credit here.
It shouldn’t have happened in the first place but shit happens.
Yes definitely, GG AMD. Gonna wait on a few full reviews of the GTX 1060 before a few of my friends make a purchase.
At least it was addressable via a driver update. Would have been a massive, expensive blunder if it had required hardware revision and recalls.
Addressable it was, not like that 3.5 and 0.5 memory fiasco that the Nvidia users got shafted with. I wonder how that GTX 970 class-action lawsuit is going; talk about the Green Gimping on that one!
Again with the 3.5 gig BS. It has a full 4 gigs; in fact, a benchmark at Guru3D shows a 970 using slightly more than 4 gigs in Hitman DX12. Impossible, I know.
http://www.guru3d.com/articles_pages/hitman_2016_pc_graphics_performance_benchmark_review,9.html
The 970 is such a gimped card that a new RX 480, with 2x-2.3x the RAM, barely beats a stock one. Except it doesn't beat a heavily overclocked one and isn't as power efficient. Yes, but look at the DirectX 12 performance, in fewer than 12 games total. DirectX 11 has how many more, LOL.
The RX 480 will be pushing up electronic daisies before DX12 is relevant.
Nvidia's cards do not improve much over time, and Nvidia even does things to keep the older hardware gimped to keep its customers on that upgrade treadmill. Even some of AMD's older SKUs will benefit from DX12 and VR/Async-compute. Just look at the raw compute stripped out of Nvidia's SKUs to see where Nvidia is getting its power savings from; it has nothing to do with Nvidia's consumer GPU micro-architecture, as the Nvidia hardware stripping for power metrics continues unabated. AMD's GPU cores have more hardware resources, so the power usage will be higher, but wait until optimized DX12/Vulkan games and VR games make good use of AMD's GPU async-compute and better computational compute for gaming and other graphics and non-graphics uses.
Those CPU-like AMD GCN ACE units will be good for a lot of different GPU acceleration tasks, like gaming physics and Ray Tracing acceleration done on AMD's GPUs. Nvidia strips out its compute and forces its users into its very costly Pro GPU solutions, while AMD has more compute remaining in its consumer SKUs for users to take advantage of. Nvidia is all about the milking and its customers are the cows, and Nvidia loves its cash cows cooked up real well. The Green Gimping continues as usual, and the fools rush in with their payday loans in hand to be overcharged to the max.
Nvidia says this about dealing with its Cows: Don’t try to understand them just fleece them up and scam them…
edit: computatainal
to: computational
WTF LibreOffice that dictionary needs some work!
This is slightly true, but as always you get a fanboy going a bit too far. Get over it; an RX 480 performs like a 970 despite being on 14nm instead of 28nm. The RX 480 is an amazing card for the money, BUT as I expected a while ago, Polaris does not come close to Nvidia in terms of having the most efficient architecture. I will, however, recommend the RX 480 for quite a few builds.
I guess you don't remember the HD 6000 series of cards that weren't supported well by AMD once the node changed to 28nm. Those cards were obsolete one year after being made.
Once cards moved to the GCN architecture, any driver update raises performance on all older GCN cards as well. Don't think that AMD is doing this for your benefit. It's always cheaper to build on the same architecture than come out with something new. They only did this because of debt and a lower R&D budget. You're deluding yourself if you think otherwise. AMD hurts their bottom line, because if you can have an older card that performs well there is less of a need to upgrade. But it is cheaper to make and manufacture, so if they sell less they can still make money.
Polaris is a decent card, but it is still GCN, so it's nothing really new aside from some VR support and primitive rasterization. You can't massively improve efficiency when it is still GCN. It's sad that Polaris is 14nm and Pascal 16nm, yet Pascal is way more energy efficient. It's good efficiency for an AMD card, but it only comes from the node shrink from 28nm and maybe some slight tweaks to the architecture.
When you don't get anything new (innovation) with AMD, you're actually the one being fleeced by paying your hard-earned cash for the same old same old.
AMD is purposely stagnating the graphics market by leveraging their console dominance. The console core is based on the HD 7850 or 7870. Very old indeed. DirectX was basically tailored to their cards because of their relationship with Microsoft and their Xbone system.
Wait until some of the other features of DX12 that Nvidia has in their architecture get supported. GCN will feel its age then.
You milk a cow and fleece sheep. LOL
I have an RX 480 4GB coming…
AMD's response time and solution satisfied my initial anger.
So now, just crossing fingers that my card can undervolt decently.
Now, the GTX 1060 is still interesting at $250, if a blower version is available. ($299 for the reference is a bit much)
The 1060 should be able to overclock pretty high in its 150w limit.
possibly enough to be 20% faster.
That is to say, I would have ordered a GTX 1060 if the "Founders Edition" was $250 and not $300.
BTW, PCPer… how low of a voltage were you able to run your RX 480 at stock clocks?
People on reddit are saying 1.3GHz at 1.075V on the luckiest silicon lottery; the average undervolt is 1.1V at 1.3GHz.
Also, people on OCN.net (BIOS modders) are saying the memory straps are maxed at 2000MHz, so there's no point in overclocking memory 🙁
Thanks for the data points.
LegitReviews made it seem like 1.05V was 'easy'.
They reported only about a ~15W saving going from 1.15V to 1.05V. So 10% lower voltage resulted in roughly 10% lower power usage.
I wonder if someone has a breakdown of power usage. Some say the fan can use significant power at high speed…
Another data point, the silicon doesn’t seem to be heat sensitive.
It doesn’t work any better at lower temp.
Anyway, looking forward to playing with the card to find its sweet spot.
As AMD/GF's 14nm process node (licensed from Samsung) improves over time, expect that RX 480 silicon will see more power usage improvements and better clocks. All silicon processes are heat sensitive, so there is some variable you are missing or not understanding. Leakage increases with heat, and any under-volting improvements are probably due to less power throttling and higher average clocks/lower clock variance at the lower voltage points.
There are a lot of speed/temperature/electrical software/hardware control loops interacting on the RX 480, so things can appear counter-intuitive. A GPU’s regulation/governing is a very complex stochastic process involving thousands of lines of firmware/software code with hardware based control systems, so the tweaking will go on with the RX480 until its replacement is announced and on the market, and probably a good while after that should it be needed.
If saving power is the objective you need to lower the thermal/power threshold. That will reduce overall power consumption by keeping the clock speeds down.
Then reducing the voltage will pull performance back upwards, as it allows the GPU to run faster by using less power per clock cycle.
Yet you hear nothing regarding nvidia fuckups. Especially not years and years later.
I know man… it's crazy. Feels just like the Clintons vs the average US citizen. I bring this up because some joker posted up top about republicans frothing about her email system. WELL… if MY company had lost even !1! ITAR email to a hack or negligence, the owner would be in prison for 10 years and we would be out of business. A little bit of a double standard? Yes, I do think so.
That doesn’t get AMD out of the park when they mess up. Each company should be held to the same standard. Unfortunately there are a lot of politics involved. oh and fanboiz. Or however you spell it.
Meh, politics should stay out of this site (and out of my life too, I wish), since anyone who claims either party is good is wrong, and anyone who likes Hillary or Trump is wrong.
That's what AMD fanboys are for. They never forget anything, just like a nagging wife.
Even if the 970 actually did only have 3.5 gigs, it doesn't matter, as it was the best-selling video card in history because of its performance/price ratio.
I don't get this idea that everything is swept under the rug. Most of their blunders are minor. If you read an Nvidia driver log you know what the open issues are and can see when they fix things too. Gotta love that transparency.
AMD flat-out lied about the RX 480's TDP. Wait, is it 110 watts? Nope, that's only the GPU. Is it 150? Nope again; even in compatibility mode it still is not hard-locked at 150 watts.
The Pascal cards stay at TDP because they throttle to maintain it. When you advertise something, you'd better live up to it.
AMD says that the 470 is going to be the 2.5-times-more-energy-efficient Polaris card, but their roadmap advertised that for the whole Polaris architecture.
AMD is constantly ramping up their cards and power consumption to stay competitive with Nvidia's last-gen cards at stock settings. A more efficient architecture usually allows a decent overclock as well. If you compare a max-overclocked card to a max-overclocked card, the Nvidia is usually on top and does it with less wattage.
Maybe benchmarks should be locked at the same wattage. What would you think of AMD’s great benchmarks then?
It reminds me of FreeSync Ghosting sh.. storm. Congrats.
AMD
———
Found problem in AMD hardware
AMD fixes it in 2-7 days.
Everyone blames AMD for being incompetent.
==============================================
Nvidia
——–
Found problem in Nvidia hardware
Nvidia says it will fix it in the next drivers. Usually they don't. Months later, users ask if Nvidia has forgotten them.
http://www.tomshardware.com/news/nvidia-vive-displayport-incompatible,32204.html
https://www.techpowerup.com/222895/nvidia-gtx-1080-founders-edition-owners-complain-of-fan-revving-issues
http://techreport.com/news/30304/nvidia-pascal-cards-still-exhibit-high-refresh-rate-power-bug
https://www.techpowerup.com/223669/geforce-gtx-pascal-faces-high-dvi-pixel-clock-booting-problems
https://forums.geforce.com/default/topic/941579/geforce-1000-series/gtx-1080-high-dpc-latency-and-stuttering/1/
Everyone praises Nvidia for being the perfect company.
AMD did NOT fix the non-compliance on the PCI-e connector.
Quote: With the original launch driver we saw the PEG slot pulling 6.8A or more, with the 6-pin pulling closer to 6.6A. On 16.7.1 the PEG slot draw rate drops to 6.1-6.2A. Again, that is still above the 5.5A rated maximum for the slot, but the drop is significant.
So yes AMD deserves all the blame for releasing this non-compliant card in the first place. And their so-called fix is still 12% over the 5.5A max.
Even in compatibility mode it is still non-compliant.
Quote: Current still doesn’t make it down to 5.5A in our testing, but the PEG slot is now pulling 5.75A in our worst case scenario, more than a full amp lower than measured with the 16.6.2 launch driver.
You can quote me again if EVER Nvidia fixes one of those bugs. Some go back for years, from Maxwell, others are just only one month old.
We had the most issues with AMD hardware in internet cafes here and got tired of fixing and tweaking it to work properly. Eventually they ditched those systems and upgraded to INTEL and Nvidia – Fermi, Kepler and Maxwell only – and they are making more profits now; we hardly ever have to repair those gaming PCs.
You’re welcome to visit Thailand and see the many gaming internet cafes running these systems day & night FYI
Fan problems on 1080 cards.
https://www.techpowerup.com/222895/nvidia-gtx-1080-founders-edition-owners-complain-of-fan-revving-issues
DVI problem on Pascal.
https://www.techpowerup.com/223669/geforce-gtx-pascal-faces-high-dvi-pixel-clock-booting-problems
Vive problems on Pascal.
http://www.tomshardware.com/news/nvidia-vive-displayport-incompatible,32204.html
High idle power problems on Pascal cards.
http://techreport.com/news/30304/nvidia-pascal-cards-still-exhibit-high-refresh-rate-power-bug
DPC latency and stuttering problems on Pascal.
https://forums.geforce.com/default/topic/941579/geforce-1000-series/gtx-1080-high-dpc-latency-and-stuttering/1/
There were also people talking about throttling problems on Pascal.
Enjoy your "problem free" Nvidia cards. Especially that power consumption problem at idle; it's perfect for driving the power bill of an internet cafe through the roof.
Non-issue for our GTX 1080s. And anybody with a Vive uses that on the HDMI port, and DP goes to your monitor. If you're still using fucking DVI, you have no business buying or owning such high-end VR gear!
The very few, and I mean FEW, cases are not as widespread as the RX 480 powergate issue, Mr. AMD fanboy, trying to damage-control and discredit a reputable tech site like PCPER. Pls get IP banned already; you're making the comments section even more toxic than it already is…
Hah, talking about the power bill: by using all Nvidia and Intel, it got lowered significantly compared to the junk AMD hardware we got rid of!
We are saving on power and repair; how about that double combo right there.
Your head must be so far up AMD's ass it's not even funny anymore.
Play one 1080p video on a Radeon and your power consumption is through the roof. AMD finally addressed this, after what, 4 years, with Polaris. Supposedly got a 30% reduction, but more is still more. It still pulls an impressive 39 watts vs 7 watts for Pascal.
Others are substantially higher.
How about multi-monitor: 40 watts for the RX 480 vs the 10-watt range for most Nvidias.
https://www.techpowerup.com/reviews/AMD/RX_480/22.html
Sorry John. At idle Pascal is 6 watts and Polaris is 15 watts.
Similar node tech, 16nm vs 14nm. You'd think a mainstream Polaris card would beat high-end Nvidia on a 2nm bigger node. Nope again.
Oh, my bad, still talking about monitors above 120Hz. It isn't really that big of an issue. Just lock the monitor to 120Hz and the problem suddenly isn't a problem. You'll be counting the money you'll be saving.
For someone running a gaming cafe, where gaming consumption is usually substantially less on the more efficient Nvidias, that is going to save them lots more money.
Pascal just came out a little while back. You gotta give them some time to fix things.
Give Polaris time. I'm sure more issues will pop up there. Especially with Vega, as it's supposed to be a new architecture, right?
After all, Windows is still fixing bugs right up until it goes to non-support status.
Everyone has problems. Some can’t be fixed easily without hardware revision.
I’m glad AMD was able to defuse their problem some. However it is still drawing amperage over spec if I understand correctly.
So does this mean that the engineers will likely be tweaking the PCB and we will see a new revision of it in future stock of the RX480?
Probably. But companies and users will turn their focus to custom cards that offer at least an 8-pin connector. So even if they come up with another revision, people will keep avoiding the reference design. We are not talking high-end stuff here. A difference of $10-$20 between the reference and the custom model will send everyone who knows what a GPU is to the custom model.
I've never been a person to buy reference cards anyway.
It just seems like a big waste when you get far better performance and temperatures out of the third-party cards.
I am personally excited to see how the ASUS STRIX RX 480 performs, as this really entices me to upgrade my ASUS STRIX R9 380 so I can have 60FPS max setting 1080P gaming (I just game on my 60 inch TV with my steam controller or xbox controller depending on the game).
So, Allyn, Ryan, are you going to test a GTX 950 with NO power connector under overclocking and tell us if that card draws 85-90W from the PCIe bus – having no other power source to turn to – or do we not want to spoil Nvidia's image?
At TechPowerUp they measured 74W at defaults. After overclocking they got 20% extra performance. Performance costs energy; it's not free. 20% extra performance, best case scenario, means 20% extra power. That's 90W from the PCIe bus.
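The arithmetic behind that estimate is simple enough to spell out. A minimal sketch, assuming power scales 1:1 with the performance gain (that linear scaling is the assumption doing all the work here, and it ignores any boost-voltage changes):

```c
/* The commenter's estimate, spelled out: scale the measured stock
 * draw by the measured OC performance gain, assuming a 1:1
 * performance-to-power relationship (an assumption). */
#include <stdio.h>

int main(void)
{
    const double base_w = 74.0;     /* TechPowerUp's stock gaming figure */
    const double perf_gain = 0.20;  /* reported OC performance uplift    */
    printf("estimated OC draw: %.1f W, all from the PCIe slot\n",
           base_w * (1.0 + perf_gain));
    return 0;
}
```

That lands at about 89W, which is where the 85-90W figure above comes from.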
did he not say they tested it already and it's not as ''bad'' as the rx 480 problem.
People were pointing at the GTX 960 Strix as having problems, NOT the GTX 950 with NO power connector.
I don't know who the moron was who started all the fuss about the Strix. The GTX 960 comes with a TDP close to 120W and is also equipped with a 6-pin PCIe power connector. So even if it produces spikes, the average will always be much lower than the limits.
On the other hand, the GTX 950 with no extra PCIe power connector is at its limit of 75W at normal frequencies. If the card were limited by the manufacturer, it would throttle and you wouldn't get significant performance gains even after overclocking it. But at TechPowerUp they measured 20% extra performance, so the card is not limited. It will ask for more power from the PCIe bus and it will get it.
The thing here is that tech sites are losing the chance to write an article to warn users, to educate users, that overclocking can push many cards outside the limits. While AMD messed up big time, many don't realize that they are running their bus outside its limits, even without owning an RX 480.
What are you talking about? It's 3% slower than the reference 950 in their Performance Summary, and the average power consumption during gaming is 75W.
https://www.techpowerup.com/reviews/ASUS/GTX_950/21.html
75W while gaming. Overclock it and you go to 90W. What is it that makes it difficult for you to understand? And don't say it doesn't go to 90W, because 20% performance doesn't come free.
Yeah, even the 750 Ti used 92W through that PCI-E slot at stock, according to Guru3D. An overclocked 950 or 750 Ti without the damn 6-pin would be 90-100W.
Now AMD is 79W on the old driver, 76W on the new, and 71W in compatibility mode, and that is somehow an issue then. Ok.
The 750 Ti is a 60W TDP card. 90W at peak isn't something really serious. Typical power draw will be under or close to the 75W limit.
But in the case of the GTX 950 with no power connector, that 90W will be constant, typical power draw from the PCIe slot, not just a peak. The GTX 950 with NO power connector will stress the PCIe bus as much as, if not more than, the RX 480 under overclocking.
They didn't increase voltage. Does power consumption increase with the same voltage but higher clock speeds?
And exceeding the spec when overclocked is a completely different story than exceeding it at stock.
And exceeding when overclocked is a completely different story than exceeding at stock.
Yes it does. And that's also the case with CPUs. So if your motherboard is barely covering the TDP of your processor, don't just go blindly and overclock it sky-high, thinking that not touching the voltage will be OK. You could end up with a dead motherboard.
If you search on Google you will find mathematical formulas that try to roughly calculate the power consumption of a chip based on its frequency and voltage.
Increasing frequency increases power consumption linearly. This means that, roughly speaking, a 20% increase in frequency will also move the power consumption 20% up.
Of course, when you are increasing voltage, things are much worse for power consumption. Power rises with roughly the square of the voltage, so the power consumption in that case climbs much faster.
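That rule of thumb is the standard first-order model for dynamic switching power, P ≈ C·V²·f: frequency enters linearly, voltage squared. A minimal sketch of what it predicts, with everything normalized to a baseline of 1.0 so only the ratios matter:

```c
/* First-order dynamic power model: P ~ C * V^2 * f.
 * Values are relative to a baseline of 1.0; only ratios matter. */
#include <stdio.h>

static double rel_power(double f_scale, double v_scale)
{
    return v_scale * v_scale * f_scale;
}

int main(void)
{
    printf("+20%% clock, same voltage : %.2fx power\n", rel_power(1.20, 1.00));
    printf("same clock, +10%% voltage : %.2fx power\n", rel_power(1.00, 1.10));
    printf("+20%% clock, +10%% voltage: %.2fx power\n", rel_power(1.20, 1.10));
    return 0;
}
```

So a 20% clock bump alone predicts 1.20x power, while adding a 10% voltage bump on top pushes it to about 1.45x; static leakage, which this model ignores, only makes the voltage penalty worse.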
Well, it's weird that in Techpowerup's temperature testing there was no increase in temperatures with the overclock. It's possible the fan speeds are targeted to maintain 69°C, but that's unknown and just an assumption.
You’re making bold accusations with no solid evidence.
It’s also interesting w1z never made note of this in his review. Is he an Nvidia shill too?
And again, since you ignored it, overclocking power consumption is a whole different story. Nvidia got the most out of the PCI specification; what users decide to do with it is on them.
Good point about the temperature. In graphics cards, fan speed will change based on temperature, as we know. What could be different under OCing is the time it takes the card to reach that temperature and how much time it stays there. With no overclocking, the card could be just touching that temperature and dropping really fast after the fans start spinning faster. Under overclocking, the card could be staying much longer at that temperature. The cooling system is big enough to cope with the extra power consumption.
I am not making bold accusations. If there were a review proving me right, everyone would be talking about obvious expectations. A card at 74W gets 20% extra performance under overclocking. It is at least naive and wishful thinking to believe that 20% more PERFORMANCE comes for free.
W1z is not obliged to start talking about what the card does under overclocking in detail. He is just testing a card at defaults and adds a little overclocking to the mix, because people will want to see a page about overclocking. If he doesn't put an OC page in his reviews he will get dozens of posts asking for it. But at the same time, no one is forcing him to repeat the entire review with the card overclocked.
And please don’t end your post with a lie. From the first time I was saying that AMD messed up at defaults and that overclocking was a different matter. Allyn was ultra happy to agree with me back then.
See my post and his here
https://pcper.com/reviews/Graphics-Cards/Power-Consumption-Concerns-Radeon-RX-480#comments
That doesn't change the fact that many will think this is an RX 480 matter that doesn't affect their hardware. Well, it probably does. People just don't know it, and unfortunately, with the press not caring, they will never find out.
I have been going through a lot of these posts, and it is making my brain hurt reading how people don't understand that higher frequency -> more power draw -> more heat -> higher risk of instability.
This is just how it works.
This is why you need to get “OC Friendly” power supplies and motherboards.
How people can’t seem to put the two together doesn’t make sense.
The reference RX 480s drawing too much power from the PCI-E slot at stock was a problem, and AMD corrected it via drivers.
However, nobody should be expecting to overclock ANYTHING on a budget motherboard and power supply, but they should be able to expect to run equipment at stock in the board.
That would be like me getting mad if I got a $50 budget motherboard and I tried a modest overclock on my CPU and the computer BSODs because it isn’t stable due to the increased power requirement.
It’s honestly a joke to me.
Overclocking is done at the user's own risk. It is not standard and doesn't have to be in spec. If anything burns out, it is their fault.
You can add another 75 watts of power, at least, to Polaris with the +50% power limit. Where would their numbers go? Through the roof. Nvidia's are limited to +20% power at most over spec. I think most modern hardware has at least 25% overhead built in, if not more; 50% overhead, maybe not.
Does the 950 hit 90 watts with standard boost? Polaris was over spec with standard boost.
I never said they are the same cases. In every post I say that AMD messed up.
But I am also saying that the press shouldn't treat it as an isolated/unique case, and should inform users who overclock that maybe they are also going over spec. Most people would never think about power usage from the PCIe bus when overclocking a graphics card. "Temps are fine, 3DMark doesn't crash, so everything is OK." Well, maybe not.
I'll agree with you here, John. If they don't get any artifacting, they deem it the max overclock and OK. No one (reviewer) is paying attention to whether they are over the PCI Express spec or not.
I usually don't overclock my video cards anyway; stock is usually good enough. Overclocking generally isn't worth the extra 10% or so. You are shortening the lifespan of your components due to the excess heat from increased power draw.
Well, nvidia's cards are limited by the TDP setting, which they follow quite precisely (feedback loop). I don't know how w1zzard OCed it, but if he did not touch the TDP percentages, the BIOS would try to restrict it to the given stock power limit (nvidia overclocking is complicated as hell nowadays). At least he said the voltage at 1447MHz was 1.08V, which is rather low. But yeah, you are right, reviewers should always warn people not to OC a card if doing so will seriously exceed the PCIe slot amperage specs.
The BIOS in this case would NOT try to restrict the card; that's why he gets 20% extra performance. No matter how complicated Nvidia's overclocking is, there is one simple fact: you can't get 20% extra performance with no extra power consumption. That 1.08V is necessary to bring power consumption down to 75W while running at the default clocks. W1z overclocks both the GPU and the GDDR5 to get that 20%. The card definitely goes over 80W, probably to 90W, based on the performance difference.
But Ryan will be as happy to hide an Nvidia problem as he is to analyze an AMD problem. As I said in another post, Raja must be the most stupid person in the industry, giving exclusives to a site that is in bed with Nvidia.
Poor you, damage controlling for AMD and not getting a cent for it.
Busy few weeks since the launch of this card, huh?
Keep it up and soon you won't be able to post here anymore because of a perma-ban or IP ban, you sad pathetic individual.
Well I guess you feel really great with your ignorance, so I will not try to spoil it. Enjoy it.
Talking about ignorance, LOL. Hypocrite much? Moron…
The 480 was drawing as high as >50% over at STOCK settings. The 950 sure as hell didn't do that. Sure, we can test it, but I'd imagine it couldn't be any worse than the 480 is post-fix. You just have to have AMD be better than NV in every possible way in your own mind, don't you?
Oh, spare me the lecture and the hypocrisy. You were promising to check it the last time, remember? You were so happy that the AMD fanboy was giving NO excuses to AMD that you were promising to check it. Now you IMAGINE? Oh, come on. We both know you are NOT going to do it, because probably I am right. A card that typically consumes 74W at defaults while playing, with peaks at 79W, and gives you 20% extra performance under overclocking will NOT keep consuming 74W. It will jump at least 10-15W higher, with peaks probably reaching 95-100W.
Yes, some people were pointing at the Strix card, a card with a 120W TDP and an extra 6-pin connector. What better card to show that Nvidia's cards don't go over the limits? I wonder who the clever guy was, pointing at a 120W TDP card that was getting at least 150W of power from two different sources as having the same problems as the RX 480.
And no, when I was saying the last time that AMD messed up, I wasn't trying to make AMD look better. And yes, AMD's card is out of limits at defaults, and that's why everybody blamed them, ME INCLUDED. But people will learn NOTHING from this. They will keep thinking that this was an AMD case that doesn't affect them. At the same time, their highly overclocked card could be sucking close to 90W constantly from the PCIe bus while playing, not having anywhere else to turn for the needed power.
Here, John, is the Asus model from techpowerup. The 950 consumes 79 watts maximum, 4 watts over spec, wow. I’d imagine the average is way lower, though. Even if it were 4 watts sustained, I don’t think it would be a big issue.
https://www.techpowerup.com/reviews/ASUS/GTX_950/21.html
All of the 950 cards I’ve come across are partner cards. How would liability come back to Nvidia?
Why even bother with this moron anyway? He is just that ignorant, butthurt fanboy who won’t let go. He probably has multiple accounts on other tech sites doing the same shit.
NVidia pays people to troll tech sites. Maybe NVidia finds fans who are brain dead and easily controlled.
When DX12 games come out in numbers in the next 6 months, NVidia trolls will cry. Software cheating tricks will not help NVidia fans with DX12 games.
It is going to be a very interesting situation.
Nvidia “politicians” pay people to troll tech “web” sites. Maybe Nvidia “politicians” find fans “sheeple” who are brain dead and easily controlled.
Fixed that for yah. People are waking up, but not fast enough.
In my view, the 1060 and this whole “FE” jazz are absolute garbage and a way to soak users for another $50. But it’s working; see above about “sheeple”. Nvidia has a superior product, for sure, and they are run better/more professionally. But come on, swing for the little guy every once in a while! It really builds up $ex appeal! If they had priced this around $25 lower and left the clocks down so it competed directly with the 480 but could OC to the MOON, they could have had a really awesome product that generated a ton of buzz.
These NVidia fans are liars. They are not trustworthy. Here is why:
“MSI GeForce GTX 1080 DirectX 12 GTX 1080 SEA HAWK EK X 8GB 256-Bit GDDR5X PCI Express 3.0 x16 HDCP Ready SLI Support ATX Video Card”
$809.99
“OUT OF STOCK PNY GeForce GTX 1080 Founders Edition 8GB GDDR5 PCI Express 3.0 Graphics Card VCGGTX10808PB-CG”
$699.99
I also suspect NVidia marketing people prey on gullible NVidia fanboys.
How about the XFX RX 480 overclocked editions? Did anyone test them? Any links?
Now I’m curious how the power phases in other cards are wired up. The R9 390 and GTX 960 didn’t show nearly as much power fluctuation on the PCIe connection as the RX 480 did. Their graphs were practically flat while the external power connector picked up the rest of the load.
From what I’ve heard from people doing hardware voltmods, the VRAM is usually wired up to the PCI-E slot and pretty much everything else takes power from the 6/8-pin connectors.
And based on Allyn’s picture, the RX 480 looks to be the opposite, with the VRAM being powered by the 6-pin connection.
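For intuition on why the phase wiring matters, here is a minimal back-of-the-envelope sketch, assuming the +12V load simply splits in proportion to how many phases each source feeds (a simplification) and using round illustrative numbers rather than measured ones:

```python
# Back-of-the-envelope: current on each +12V source for a given phase split.
# The proportional split and the 150 W load are simplifying assumptions.

PCIE_SLOT_RATED_A = 5.5   # PCIe CEM spec: 5.5 A on the slot's +12V pins
SIX_PIN_RATED_A = 6.25    # 6-pin connector: 75 W / 12 V nominal

def rail_currents(total_watts, slot_phases, ext_phases, volts=12.0):
    """Split a +12V load across the slot and the external connector
    in proportion to the number of phases fed by each source."""
    total_amps = total_watts / volts
    slot_amps = total_amps * slot_phases / (slot_phases + ext_phases)
    ext_amps = total_amps * ext_phases / (slot_phases + ext_phases)
    return slot_amps, ext_amps

# Launch behavior: ~150 W split evenly across a 3/3 phase assignment.
slot_a, ext_a = rail_currents(150, slot_phases=3, ext_phases=3)
print(f"3/3 split: slot {slot_a:.2f} A (rated {PCIE_SLOT_RATED_A} A), "
      f"6-pin {ext_a:.2f} A (rated {SIX_PIN_RATED_A} A)")
# slot lands at 6.25 A, over its 5.5 A rating
```

In this toy model, shifting to a 2/4 split drops the slot to about 4.2 A, safely under its rating, while pushing the 6-pin to about 8.3 A; the external connector hardware generally tolerates overage far better than the slot does, which is presumably why the driver fix shifts load in that direction.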
Glad to see AMD has mitigated this power issue. My 480 is expected to arrive Friday to replace a 6870.
In my opinion, AMD’s PR initially created that power misconfiguration on purpose, hoping (or already having the needed reviewers in place) that some reviewers would spot it and, consequently, sound the alarm to the masses. Once that happened, PR came out reassuring everybody that they would fix it soon, and indeed they fixed it. Providing the fix creates the positive impression that the red team really cares about its audience, that we are of the utmost importance to them. But what was actually happening is AMD “programming” the multitude’s minds. They are trying to get more folks to join their bandwagon; trying to pull, figuratively speaking, the “market blanket” to their side.
Did a lot of people cancel their RX 480 orders after hearing about this issue?
Nope, still sold out in most parts of the world. 🙁
I was going to save money and just upgrade to an RX 480. I bought an R9 270 expecting to upgrade 6 months later, but ended up putting it off more and more until this last generation of GPUs. The 480 would have been perfectly fine for what I need, especially at $200, but after the whole power thing, I decided to go with the GTX 1070. I know I’m not the only person who did the same.
I’ve always bought AMD since I started building PCs in 2010, but now I’m going to be Green and Blue for the first time ever =/.
I can assure you, AMD isn’t dumb enough to do what you’re suggesting.
Please tell me this is sarcasm. If not, then I’ll say that’s sad.
How is overclocking the RX 480 going to affect the card’s power draw? One site found a 122 watt average 12-volt draw, against the 71.28 watt PCIe 12-volt maximum spec, while running a benchmark at a minor 1300 MHz overclock. The new driver and its compatibility mode leave scant room for any more power draw from the reference card. Custom cards will be running even closer to the edge with factory overclocks and will need to do a lot more than just let users run the new AMD driver. They will need to rewrite the video BIOS at a minimum and balance the cards with more precision. It is not heat that is the 480’s limiting factor in overclocking; it is the power needed to do it.
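To put that 122 W figure in context, here is a quick bit of arithmetic. Note that reading the 71.28 W ceiling as the 5.5 A slot rating at 12 V plus the spec’s 8% voltage tolerance is my assumption, not something stated above:

```python
# How far past the PCIe slot's +12V budget a 122 W average draw lands.
# SLOT_MAX_W reproduces the 71.28 W figure cited above, assuming it is
# the 5.5 A rating at 12 V with an 8% voltage tolerance applied.

SLOT_RATED_A = 5.5
SLOT_MAX_W = 5.5 * 12.0 * 1.08   # = 71.28 W

measured_w = 122.0               # reported average at the ~1300 MHz OC
amps = measured_w / 12.0

print(f"{measured_w:.0f} W is {amps:.2f} A on the slot, "
      f"{amps / SLOT_RATED_A:.0%} of the 5.5 A rating")
print(f"Overage: {measured_w - SLOT_MAX_W:.1f} W "
      f"({measured_w / SLOT_MAX_W - 1:.0%} over the 71.28 W ceiling)")
```

At roughly 10 A on pins rated for 5.5 A, it is easy to see why a factory overclock on top of that leaves no margin at all.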
You don’t buy reference cards for overclocking, not if you’re serious about it, much less AMD reference cards. Now I’m just glad this driver fix will make it next to impossible for AIB cards with better 2x6-pin, 8-pin, and 8+6-pin setups to go over spec. So this is essentially a non-issue.
Phew. The internet can rest again! And huge thanks to you guys! Cheers!
While I’m not concerned about the PCIe power draw and I’m not even interested in a reference 480, it is something I will take into consideration when buying a card in the next few weeks, and I’ll be keeping an eye on how partner 480s handle it.
It’s about peace of mind for me and not having to be concerned with it. The larger the margin, the better. Not much different from how everyone runs their GPUs and Intel CPUs at 70°C when they could run at 90°C and above.