The ASUS GTX 980 Ti STRIX DCIII OC comes with the newest custom cooler from ASUS and a fairly respectable factory overclock of 1216MHz base, 1317MHz boost, and a 7.2GHz effective clock on the impressive 6GB of VRAM. Once [H]ard|OCP had a chance to use GPUTweak II, manual tweaking pushed those values to 1291MHz base, 1392MHz boost, and an 8GHz effective VRAM clock; for those who prefer automated overclocking there are three preset modes, ranging from Silent to OC mode, that will instantly get the card ready to use. With an MSRP of $690 and a street price usually over $700 you have to be ready to invest a lot of hard-earned cash in this card, but at 4K resolutions it does outperform the Fury X by a noticeable margin.
"Today we have the custom built ASUS GTX 980 Ti STRIX DirectCU III OC 6GB video card. It features a factory overclock, extreme cooling capabilities and state of the art voltage regulation. We compare it to the AMD Radeon R9 Fury, and overclock the ASUS GTX 980 Ti STRIX DCIII to its highest potential and look at some 4K playability."
Here are some more Graphics Card articles from around the web:
- EVGA GTX 980 Ti Classified ACX 2.0+ @ Kitguru
- Gigabyte G1 Gaming GTX 980Ti 6GB @ eTeknix
- Colorful iGame GTX 980 Ti 6GB @ techPowerUp
- MSI GTX 980 Ti Lightning Review @ OCC
- PNY GTX 980 XLR8 Review @ OCC
- MSI GeForce GTX 950 Gaming 2 GB @ techPowerUp
- GTX 780 Ti vs R9 290X; The Rematch @ Hardware Canucks
- ARCTIC Accelero Hybrid III-140 vga cooler @ HardwareOverclock
- AMD Linux Graphics: The Latest Open-Source RadeonSI Driver Moves On To Smacking Catalyst @ Phoronix
- Running The AMD Radeon R9 Fury With AMD's New Open-Source Linux Driver @ Phoronix
- HIS R7 360 iCooler OC 2GB Video Card Review @ Madshrimps
- PowerColor Radeon R9 380 PCS+ Graphics Card Review @ Techgage
- Tiny Radeon R9 Nano to pack a wallop at $650 @ The Tech Report
Nope. Not even close.
Zotac's AMP Extreme is the best 980 Ti currently (if you don't count Colorful's monstrosities).
While I suspect a typo, I would be very excited for 8GB of VRAM from software tweaks!
dang it, fixed. But maybe Sebastian could teach you how to add that extra 2GB
Or just buy a Radeon. %)
If you are only going to use the GPU for games, then Nvidia may make some sense at the moment, but if you want to do more with future Vulkan/DX12-based games then AMD's asynchronous compute appears to be the way to go, whether you want a GPU for more than just gaming or now for gaming too. Nvidia's attempt at product segmentation and stripping more of the asynchronous compute ability out of its SKUs has backfired now that game makers and the graphics APIs are able to make more use of this hardware asynchronous compute ability.
I'm sure that Nvidia's short-sighted marketing department made the most of Nvidia's power-usage advantage at the expense of compute, but now the entire gaming software stack is starting to move towards utilizing all of the hardware compute/graphics abilities AMD includes in its ACEs and their hardware-based asynchronous compute capability.
So now not just games but other software applications as well will benefit from AMD's GPUs. Nvidia's marketing made its bed by hawking its products only to gaming-focused consumers, and now that hardware-based asynchronous compute has been shown to be an advantage for gaming as well, Nvidia is left to lie in that bed. Better get some new tape-outs on the way, Nvidia, because the gaming software and graphics APIs are not going back to the pre-hardware-asynchronous-compute days.
It's never again going to be just a matter of doing all the non-graphics computation on the CPU for gaming; a whole lot more of that work is going to be done on newer GPUs whose hardware asynchronous compute ability lets them run more than just graphics operations. Gaming physics and lighting require a lot of parallel computation that can be done more efficiently on the GPU, in addition to the pure graphics workload.
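To make the "parallel computation" point concrete, here is a minimal sketch in C++, with the standard parallel algorithms standing in for the thousands of GPU threads a compute queue would actually use; the data layout, particle counts, and timestep are purely hypothetical.

```cpp
// Minimal per-particle integration step: each particle is independent,
// which is exactly the kind of work that maps well onto GPU compute.
// CPU-side parallel algorithms are used here only as a stand-in.
#include <algorithm>
#include <execution>
#include <vector>

struct Particle {
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity
};

// Advance every particle by one timestep under constant gravity.
void integrate(std::vector<Particle>& particles, float dt) {
    std::for_each(std::execution::par,
                  particles.begin(), particles.end(),
                  [dt](Particle& p) {
                      constexpr float g = -9.81f;  // gravity on the y axis
                      p.vy += g * dt;       // accumulate acceleration
                      p.px += p.vx * dt;    // integrate position
                      p.py += p.vy * dt;
                      p.pz += p.vz * dt;
                  });
}

int main() {
    // Hypothetical particle count and 60 Hz timestep.
    std::vector<Particle> particles(100000, Particle{0, 10, 0, 1, 0, 0});
    for (int frame = 0; frame < 60; ++frame)
        integrate(particles, 1.0f / 60.0f);
}
```

Because every particle updates independently, the same loop maps naturally onto GPU compute shaders, which is why engines increasingly push this sort of work off the CPU.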
This is a GAMING card. Your observations may or may not be correct, but they have nothing to do with this piece of hardware.
And gaming engines will be using all of the asynchronous compute ability that is available, with much more done on the GPU and much less done on the CPU. Say goodbye to the latency-inducing motherboard CPU for discrete gaming as the game engine/graphics API software stack takes advantage of all that GPU number-crunching ability. Who needs those 8 CPU cores when there will be thousands of GPU cores available for game engines to utilize? Let the motherboard CPU do its janitor's job of keeping the OS/bloatware in order while the GPU runs the games. Discrete GPUs will be getting more CPU-like abilities going forward, and may even get a few on-die CPU cores to reduce latency to a minimum.
There will be some exceptions for motherboard gaming APUs that may be built from CPUs/GPUs/HBM on an interposer, with beefier discrete-like GPUs, plenty of ACE units, and CPU cores wired to the GPU in a wider, more direct manner by interconnects thousands of traces wide etched onto the interposer's silicon substrate. Expect the CPUs on these interposer-based APUs to be further integrated with the GPU, with even more HSA compute ability offloaded to those thousands of GPU cores. Hardware-based GPU asynchronous compute is the future of gaming.
So all of this kit with NO hardware-based "asynchronous compute" ability will quickly be placed at a disadvantage, since it only runs using the outdated API feature set's limited hardware support. Enjoy waiting for the draw call to complete before you can do any "asynchronous" work, and even then it's either graphics or compute, but not both at the same time. Vulkan and DX12 will be doing things asynchronously from now on. Better get your hardware in order, Nvidia!
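To illustrate the distinction being drawn between serialized and concurrent submission, here is a toy CPU-side analogy in C++ (not real GPU, Vulkan, or DX12 API code, and the workload durations are made up) that contrasts running a "graphics" pass and a "compute" pass back-to-back with submitting them to independent queues, modeled here as threads:

```cpp
// Toy CPU-side analogy of "async compute": two workloads either run
// back-to-back (one queue: graphics, then compute) or concurrently
// (separate queues). Purely illustrative; not real GPU/driver code.
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

using namespace std::chrono;

// Stand-ins for a graphics pass and a compute pass (hypothetical durations).
void graphics_pass() { std::this_thread::sleep_for(milliseconds(8)); }
void compute_pass()  { std::this_thread::sleep_for(milliseconds(5)); }

int main() {
    // Serialized: the "compute" work waits for the draw work to finish.
    auto t0 = steady_clock::now();
    graphics_pass();
    compute_pass();
    auto serial_ms = duration_cast<milliseconds>(steady_clock::now() - t0).count();

    // Concurrent: each workload goes to its own "queue" (a thread here),
    // roughly analogous to a dedicated compute queue beside the graphics queue.
    auto t1 = steady_clock::now();
    auto gfx = std::async(std::launch::async, graphics_pass);
    auto cmp = std::async(std::launch::async, compute_pass);
    gfx.wait();
    cmp.wait();
    auto async_ms = duration_cast<milliseconds>(steady_clock::now() - t1).count();

    std::cout << "serialized: " << serial_ms << " ms\n";
    std::cout << "concurrent: " << async_ms  << " ms\n";
}
```

On hardware with dedicated asynchronous compute queues this overlap happens on the GPU itself; without it, execution ends up closer to the serialized case above.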
Short-sighted?
And yet, AMD had a CPU plan that was based on games using multiple, less efficient cores. How did that go?
Future-proofing existing technology while gimping current plans isn't necessarily the best way to go either.
NVidia is profitable. AMD is not. Perhaps NVidia actually has a good plan?
There will always be NEW CARD designs as well.
Finally, it's not like DX12 is going to suddenly make DX11 pointless. Not only do people continue to play and buy older games, where NVidia is MUCH better at DX11, but DX12 games also aren't going to be the majority of games sold for a long time.
It’s a bit premature to be drawing those conclusions.
I'd like to see how things unfold, and would like to see updated results following Kollock's latest update (which came on the Friday before Labor Day).
If, and only if, nVidia's proposed solutions continue to prove ineffective will there be a concern about long-term viability.
Considering the same Kepler white papers are referenced for Maxwell, whatever solution nVidia formulates will apply across the product line – for better or worse :p
I’d like to see that before making any judgements.
But how does it handle ASYNC shaders?
It doesn't matter; by the time the next gen is out, Nvidia will make you give them money via driver emulation.
“Asynchronous Shaders? What is that?” (c) Huang
“Async Compute? Nope, never heard of it.” (c) Maxwell