[H]ard|OCP has had more time to spend with their reference GTX 980 and has reached the best stable overclock they could on this board without moving to a third-party cooler or serious voltage mods: 1516MHz core and 8GHz VRAM. Retail models will of course offer different results, but it is not too shabby a result for a reference card. The overclock was not easy to reach, and how they managed it and the lessons they learned along the way make for interesting reading. The performance increases were noticeable; in most cases the overclocked card beat the stock card by around 25%, and since this was a reference card, retail cards with enhanced coolers and the possibility of a custom BIOS that disables NVIDIA's TDP/power limit settings could go even faster. You can bet [H] and PCPer will both be revisiting the overclocking potential of GTX 980s.
"The new NVIDIA GeForce GTX 980 makes overclocking GPUs a ton of fun again. Its extremely high clock rates achieved when you turn the right dials and sliders result in real world gaming advantages. We will compare it to a GeForce GTX 780 Ti and Radeon R9 290X; all overclocked head-to-head."
Here are some more Graphics Card articles from around the web:
- GeForce GTX 980 cards from Gigabyte and Zotac @ The Tech Report
- Palit GTX980 Super Jetstream OC @ Kitguru
- The NVIDIA GTX 980 SLI Review @ Hardware Canucks
- Gainward Phantom GeForce GTX 970 4GB @ eTeknix
- MSI GeForce GTX 980 Gaming 4 GB @ techPowerUp
- NVIDIA GeForce GTX 980M & GTX 970M Preview @ Hardware Canucks
- NVIDIA GTX 970 SLI Performance Review @ Hardware Canucks
- NVIDIA GeForce GTX 980 Dominates With OpenCL On Linux @ Phoronix
- Sapphire R9 270X Toxic Vs NZXT Kraken Cooling @ eTeknix
- Raijintek Morpheus GPU Cooler @ eTeknix
- Arctic Accelero Hybrid II-120 Liquid GPU Cooler @ Kitguru
- AMD Radeon R9 285 Tonga Performance On Linux @ Phoronix
- Gigabyte AMD Radeon R9 285 WindForce OC Video Card Review @ Madshrimps
- HIS R9 290X iPower IceQ X2 Turbo 4GB GDDR5 Video Card Review @ Madshrimps
- Sapphire Radeon R9 285 ITX Compact OC Review @ HiTech Legion
- XFX R9 280 Double Dissipation 3GB @ [H]ard|OCP
We’ve been flooded with GTX 980/970 news and reviews from everywhere. I have a few points to make about Nvidia’s new cards, points that haven’t been made or addressed by any site.
First of all, the new cards are barely faster, if at all, than AMD’s 290-series cards, and are currently far more expensive with no free game bundles.
Secondly, the acclaimed power efficiency is not the product of the “Maxwell” microarchitecture. It’s the result of using a more advanced and less leaky 28nm process node from TSMC.
AMD has been using the first iteration of TSMC’s 28nm process, whereas Nvidia, being late with their Kepler chips, fabricated them on a more polished 28nm HPP+ node from TSMC, which has since improved dramatically in leakage and performance.
This story was mentioned by Charlie Demerjian back in 2012.
Most people don’t understand the basic concepts of computer logic design. Two chips of very similar complexity, 436mm^2 for Hawaii vs 416mm^2 for GM204, cannot have such a variance in power usage unless the underlying fabrication silicon is very different.
This might be the most pathetic thing I’ve ever read on the internet.
I agree with ya after reading that other idiot’s comment. Power usage can sure be that different; it’s not just about which TSMC node it is. The Maxwell chip has about 800 fewer CUDA shaders than the 780 Ti had. Also, since they shrank the memory bus by 30%, that also decreases power draw, since they can use fewer RAM chips and use chips that hold more. There is a lot more that helps with it.
… doubled
Oh? how about this:
“Hurr durr ShitMD have the lamest drivers even though I haven’t used AMD GPUs in many years and couldn’t possibly know whether it’s true or not.”
or
“Hurr durr ShitMD GPUs consume sooo much more power than Nvidia’s that make my power bill cost BILLIONS more per month even though I live with my parents and don’t pay the power bill anyways.”
Sounds like an AMD fanboy with hurt feelings.
Let’s talk about transistor count:
Full Hawaii = 6.2 B
Full GM204 = 5.2 B
Performance difference is known
Full Tonga = about 5 B
Full GM204 = 5.2 B
Performance difference would be for laughs
Does underlying fabrication matter in transistor count?
hehehe
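(A quick back-of-the-envelope check on the density side of this argument: the sketch below simply divides the transistor counts listed above by the die areas quoted earlier in the thread, 436mm^2 for Hawaii and 416mm^2 for GM204. The figures come from the comments themselves rather than an official source, so treat the output as illustrative only.)

```python
# Rough transistor-density comparison using the numbers quoted in this thread
# (transistor counts from the list above, die areas from the earlier comment).
chips = {
    "Hawaii": {"transistors_billion": 6.2, "die_area_mm2": 436},
    "GM204":  {"transistors_billion": 5.2, "die_area_mm2": 416},
}

for name, c in chips.items():
    # millions of transistors per square millimetre
    density = c["transistors_billion"] * 1000 / c["die_area_mm2"]
    print(f"{name}: ~{density:.1f} M transistors/mm^2")
# Prints roughly 14.2 M/mm^2 for Hawaii and 12.5 M/mm^2 for GM204 -
# broadly similar densities, which is the figure being argued over here.
```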
The slight difference in
The slight difference in transistor count doesn’t change the point I’m making and is not a counterargument. Those two chips are built on two different fabrication processes, and that’s why their power characteristics are very different.
The extra transistors of Hawaii are spent on features such as double-precision support at 1/4 its single-precision rate, a programmable DSP, and display control logic for adaptive sync. GM204 lacks all these features and has a memory controller half as wide.
The performance of the GTX 980 is somewhere around that of the 290X despite clocking much higher under load.
So all in all, Maxwell is not the magic they’re touting. It’s a chip that has the same performance/mm^2 as Hawaii, and had it been built on the same 28nm HPP process AMD has been using since 2012, it would, in my estimation, have a very similar power rating.
A worse load of drivel has never come out of a fanboi’s mouth before. Well done for being the first!
The comment section of PC Perspective is full of Nvidia shills, and reasoning with your ilk isn’t the easiest of tasks.
I presented you with a multi-point argument and you didn’t even mention any of the points in your reply.
If you can’t refute anything I said, then you just need to keep silent instead of embarrassing yourself, or rather exposing yourself in this case.
Oh, I just ordered a pair of R9 290s for my LAN rig; I’m such an Nvidia shill!
It sucks being a fanboi, doesn’t it? You can’t be happy whenever your favorite team is behind.
I’m not in the business of feeding the trolls. If and when I ever get into it though, I’ll come back and refute all the rubbish you’ve left behind!
I don’t give a rat’s a** about that. All those petty claims are not an argument. Calling my argument pathetic without pointing out why and how is not an argument, and neither is making those petty claims, which are lies meant to avoid facing the argument put forward.
I’m gonna list the main points I made in my argument:
1- Maxwell is not the magic Nvidia is touting. Maxwell’s performance/mm^2 is the same as that of GCN 1.1, and it’s more power savvy because it’s fabricated on a different, less leaky 28nm process node.
2- The GTX 980 at $550 is far more expensive than the 290X at $400 with three free games. It’s not a better buy by any logic.
3- Lastly, I don’t care what you are or whose fan you are. When presented with an argument, you either argue or you GTFO.
Watch out everyone! We’ve got a badass troll over there!
rofl, they are both using the 28nm process; learn a bit before making the dumbest statements possible.
Can you point to any public information on what process revision AMD and Nvidia are using? I doubt this is public, but it is somewhat naive to think that TSMC would have left the process tech exactly the same for ~3 years, as a lot of people seem to think. They are going to be continuously tweaking the process for better yield and better performance. Also, they obviously have multiple variants of the 28 nm node. I assume more advanced versions command a price premium though. With 20 nm production, Apple seems to have bought up all of TSMC’s capacity. Nvidia may be paying a premium for slightly better process tech; this could justify their higher price at the consumer level to some extent.
I assume a large part of Nvidia’s power consumption advantage comes from going narrower and increasing clock speed to make up the performance. This is obviously a good way to go if the process tech allows pushing the clock that high without the power consumption getting out of control. I would expect some 20 nm capacity to be coming available, but it is unclear when we will actually see it for GPUs. The defect levels may still be too high for large chips. We may see the smaller, mid-range GPUs on 20 nm first. The switch to 20 nm is when we will see the real next-generation performance. I am assuming that AMD’s R9 390 will still be on the same 28 nm process they are currently using though, and the mid-range release after that will be on 20 nm. Nvidia may have the power consumption advantage until 20 nm in that case.
Unfortunately, you can’t switch process nodes arbitrarily; targeting a significantly higher clock speed requires design changes. AMD may have made a decision to target 20 nm rather than spend resources on a design for a more optimized 28 nm node. A lot of people do not realize the lead times involved; such a decision would have been made a long time ago. Personally, I value stability over performance, so I would tend not to overclock. Without overclocking, AMD has better price/performance at the moment on the desktop. If in the market for a gaming laptop, I would definitely consider Nvidia 980M models.
“It’s the result of using a more advanced and less leaky 28nm process node from TSMC.
AMD has been using the first iteration of TSMC’s 28nm process, whereas Nvidia, being late with their Kepler chips, fabricated them on a more polished 28nm HPP+ node from TSMC, which has since improved dramatically in leakage and performance.”
Citation needed. The R9 290 GPUs arrived after Kepler, both GK104 and even GK110. GK110 was available in Tesla cards in late 2012, while the 290/290X were not available until Q4 2013. So how is Kepler late relative to Hawaii? Why, with Hawaii arriving so much later than Kepler, would they use leaky 28nm? I think you’ll need more than your say-so (which comes across as the musings of a fanboy) when you claim that the architecture isn’t playing a role. Also, when comparing costs you’re completely forgetting and discounting the 970. I’ll grant you the 980 is overpriced, but trying to make that claim about the 970 as well, since you said “new cards are…and are currently far more expensive”, is laughable.
Let’s not forget that Nvidia’s performance advantage comes mostly from its internal bandwidth compression, a feature that AMD’s Tonga did first. Pirate Islands (AMD’s next high-end GPU) will also use bandwidth compression alongside a heavily modified GCN architecture and possibly even a new process node. They know they aren’t competing against the 980, but rather the 980 Ti. 2015 is going to be awesome for gamers.
To all those people rushing to buy the 970 and 980: you’re almost certainly being screwed. The real performance part is the 980 Ti / GM200. So wait for AMD to release their successors to the 290/290X; Nvidia will then see how well AMD’s parts perform, spec and clock the 980 Ti to perform about 5% faster than AMD’s fastest part, and thus take the “performance crown”, but at a huge premium.
They always do this.
I just can’t understand why gamers are so excited about the 970 and 980, especially about how they’re “trading blows” with the 290 and 290X. “Trading blows” with previous-gen cards is pathetic. Remember when new cards launched with performance well clear of previous-gen cards, in a league and numerical range of their own? Pepperidge Farm remembers. Gamers had standards back then. Now they drool over anything new. Efficiency is all well and good, but I want new and unprecedented levels of performance. This is a desktop, not an MF’ing laptop.
The 970 and 980 are mostly price drops.
The 980 more likely, the 970 less so. If the 980 Ti / R9 390 comes, it will be going for the high-end $549 to $700 price-range crown, so the 970 is less likely to be affected; the 980 will probably stabilize in the low-to-mid $4XX range. But that’s a quarter or two away.
Not sure what you mean by “trading blows”? If you read the HardOCP article, you will see that when both an aftermarket R9 290X and a reference GTX 980 are overclocked, it is not even close; the GTX 980 wins hands down:
The GTX 980 is faster @ 2560×1440 in:
BF4 – 31%
Watch Dogs – 36%
Crysis 3 – 16%
Far Cry 3 – 26%
Tomb Raider – 18%
At 4K it still wins easily:
BF4 – 25%
Watch Dogs – 39%
Crysis 3 – 8%
Far Cry 3 – 30%
Tomb Raider – 8%
I would call this a knockout punch.
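(For a rough summary of the figures in that comment, the snippet below simply averages the quoted percentage leads at each resolution; the numbers are copied from the list above, not re-measured.)

```python
# Average of the GTX 980's quoted overclocked leads over the overclocked R9 290X,
# using HardOCP's percentages as listed in the comment above.
leads_1440p = {"BF4": 31, "Watch Dogs": 36, "Crysis 3": 16, "Far Cry 3": 26, "Tomb Raider": 18}
leads_4k = {"BF4": 25, "Watch Dogs": 39, "Crysis 3": 8, "Far Cry 3": 30, "Tomb Raider": 8}

for label, leads in (("2560x1440", leads_1440p), ("4K", leads_4k)):
    avg = sum(leads.values()) / len(leads)
    print(f"{label}: average lead of about {avg:.1f}%")
# Works out to roughly a 25% average lead at 1440p and 22% at 4K.
```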
Well, first, the test has several flaws to begin with: since the card just launched, the reference GTX 980 is likely a press-sample card (as opposed to a retail card) while the 290X likely isn’t. And press samples are handpicked and have much better binning than retail cards.
Second, the 290X being aftermarket and the 980 being reference doesn’t say much, as Nvidia, unlike AMD, ships their reference cards with coolers that are pretty much as good as aftermarket ones. So if the comparison were aftermarket to aftermarket, I would expect any additional gains for the 980 to be modest at best.
Third, they didn’t choose the right aftermarket card for the 290X. The Sapphire R9 290X Tri-X is better cooled, better performing, and cheaper than the Asus DCU II.
All this considered, I would not call comparative performance in the low double digits, with several figures in the single digits, a “knockout punch”.
But don’t get me wrong, I ~want~ Nvidia to knock out AMD, so that AMD can knock them out right back. In my dictionary, a knockout punch is an increase in the high double digits; think 80-100%.
I wish Nvidia, at least for the desktop parts, had disregarded efficiency and gone for maximum performance at the 780 Ti’s power-draw levels.
As a side note, power efficiency wasn’t a problem before Maxwell, and greater power efficiency wasn’t something that any desktop gamers were asking for. What this is, is an example of how investment in mobile is taking investment and resources away from traditional PC gaming and desktop GPUs. I think it is as noxious to PC gaming as console gaming is, if not more so.
But back on topic: the problem is that Nvidia delivers only as much value to gamers as they can get away with, just enough to put them a tiny bit ahead of AMD. And the bigger problem is that Nvidia fans don’t call them out on it but instead buy into whatever marketing hype Nvidia sells, and get excited about gimped GPUs with single-digit to low-double-digit performance increases when they should be demanding high-double-digit increases for the benefit of all PC gamers, on both AMD and Nvidia GPUs.
TL;DR:
Nvidia is giving you scraps and breadcrumbs and you’re acting like it’s a succulent feast; as a result, all gamers, whether on Nvidia or AMD GPUs, lose out.
The HardOCP article makes me think these cards would significantly benefit from some active cooling. For the sake of my computer, though, it is lucky I can’t afford to slap together a sub-room-temperature water cooling setup and hang it upside down so condensation hopefully runs away from it.
I was close to pulling the trigger on two GTX 970s for SLI, but after reading about the stunning, and I mean stunning, performance of the reference GTX 980 when overclocked compared to the overclocked competition, I am not sure what to do.
It’s only $150 less for the Gigabyte G1 GTX 970 compared to a reference GTX 980 in Australia, but the GTX 980 is significantly faster when both are overclocked.