The new GTX 980 Ti is here and, as we found out, it can match the performance of the TITAN X at a much lower price of $650!
When NVIDIA launched the GeForce GTX Titan X back in March of this year, I knew immediately that the GTX 980 Ti would be close behind. The Titan X was so different from the GTX 980 when it came to pricing and memory capacity (12GB, really??) that NVIDIA had set up the perfect gap in which to place the newly minted GTX 980 Ti. Today we get to take the wraps off of that new graphics card, and I think you'll be impressed with what you find, especially when you compare its value to the Titan X.
Based on the same Maxwell architecture and GM200 GPU, with some minor changes to GPU core count, memory size and boost speeds, the GTX 980 Ti finds itself in a unique spot in the GeForce lineup. Performance-wise it's basically identical to the GTX Titan X in real-world game testing, yet it is priced $350 less than that 12GB behemoth. Couple that with a modest $50 price drop on the GTX 980 and you have all the markers of an enthusiast graphics card that will sell as well as any we have seen in recent generations.
The devil is in all the other details, of course. AMD has its own plans for this summer, but the Radeon R9 290X is still sitting there at a measly $320, less than half the price of the GTX 980 Ti. NVIDIA seems to be pricing its own GPUs as if it isn't even concerned with what AMD and the Radeon brand are doing. That could be dangerous if it goes on too long, but for today, can the R9 290X, with its aging Hawaii XT GPU, put up enough of a fight to make its value case to gamers on the fence?
Will the GeForce GTX 980 Ti be the next high-end GPU to make a splash in the market, or will it make a thud at the bottom of the GPU gene pool? Let's dive into it, shall we?
GTX 980 Ti Specifications
If my introduction didn't give the whole story away, this table likely will. The GeForce GTX 980 Ti is almost indistinguishable from the GTX Titan X anywhere it matters.
| | GTX 980 Ti | TITAN X | GTX 980 | TITAN Black | R9 290X |
|---|---|---|---|---|---|
| GPU | GM200 | GM200 | GM204 | GK110 | Hawaii XT |
| GPU Cores | 2816 | 3072 | 2048 | 2880 | 2816 |
| Rated Clock | 1000 MHz | 1000 MHz | 1126 MHz | 889 MHz | 1000 MHz |
| Texture Units | 176 | 192 | 128 | 240 | 176 |
| ROP Units | 96 | 96 | 64 | 48 | 64 |
| Memory | 6GB | 12GB | 4GB | 6GB | 4GB |
| Memory Clock | 7000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 5000 MHz |
| Memory Interface | 384-bit | 384-bit | 256-bit | 384-bit | 512-bit |
| Memory Bandwidth | 336 GB/s | 336 GB/s | 224 GB/s | 336 GB/s | 320 GB/s |
| TDP | 250 watts | 250 watts | 165 watts | 250 watts | 290 watts |
| Peak Compute | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.1 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 8.0B | 8.0B | 5.2B | 7.1B | 6.2B |
| Process Tech | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $649 | $999 | $499 | $999 | $329 |
Let's start with the CUDA core count, where the GTX 980 Ti hits 2,816 cores, 256 fewer than the GTX Titan X, a drop of just over 8%. Along with that goes a drop in texture units, to 176 from 192. The ROP count and 384-bit memory bus remain identical though, providing the same amount of memory bandwidth as NVIDIA's $999 flagship offering.
Block Diagram of the GM200 inside the GTX 980 Ti - Two SMMs disabled
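If you want to sanity-check the table above, the peak compute and memory bandwidth figures fall straight out of the core counts and clocks. Here is a minimal sketch of that arithmetic, assuming the standard 2 FLOPs (one fused multiply-add) per core per clock and treating the quoted GDDR5 "memory clock" as the effective data rate, the way spec sheets list it:

```python
def peak_fp32_tflops(cuda_cores: int, clock_mhz: int) -> float:
    # Peak single precision: 2 FLOPs (one FMA) per core per clock
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

def bandwidth_gb_s(bus_width_bits: int, effective_mem_clock_mhz: int) -> float:
    # Bus width in bytes times the effective GDDR5 data rate
    return (bus_width_bits / 8) * effective_mem_clock_mhz * 1e6 / 1e9

print(peak_fp32_tflops(2816, 1000))  # GTX 980 Ti: 5.632 TFLOPS
print(peak_fp32_tflops(3072, 1000))  # Titan X:    6.144 TFLOPS
print(bandwidth_gb_s(384, 7000))     # both cards: 336.0 GB/s
```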
You will notice a drop from 12GB of memory to 6GB for this card, which is not surprising. In all honesty, for gamers, the 12GB of memory was always fool's gold, never affecting gameplay even in the most extreme configurations; even 4K Surround (triple monitors) would likely be unable to utilize it. At 6GB, the GTX 980 Ti has the highest default memory capacity of any reference card other than the Titan X. For even the most demanding games, at the most demanding resolutions, 6GB of memory will be just fine. Our test settings at 4K for Grand Theft Auto V, for example, are estimated to use about 4.9GB, and that's the highest of any current title we test. And with all the rumors circulating about AMD's Fiji being stuck at 4GB because of its HBM implementation, expect NVIDIA to tout and advertise that advantage in any way it can.
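To see why even 6GB is plenty, consider how little of that footprint the render targets themselves occupy at 4K. The sketch below is a rough illustration with an assumed buffer layout (the target counts and formats are hypothetical, not GTA V's actual pipeline); the point is that most of a 4.9GB footprint is textures and streamed assets, which don't grow with resolution:

```python
# Hypothetical deferred-renderer buffer layout; counts and formats are
# assumptions for illustration, not any specific game's configuration.
width, height = 3840, 2160     # 4K
color_targets = 5              # assumed G-buffer + back buffer count
bytes_per_pixel = 4            # assumed RGBA8 per color target
depth_bytes = 4                # assumed 32-bit depth/stencil

per_frame_mb = width * height * (color_targets * bytes_per_pixel + depth_bytes) / 2**20
print(f"~{per_frame_mb:.0f} MB of render targets")    # ~190 MB
print(f"~{3 * per_frame_mb:.0f} MB for 4K Surround")  # ~570 MB, still far under 6GB
```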
Even though the rated base clock of the GTX 980 Ti is the same as the Titan X at 1000 MHz, in my experience with the two graphics cards, the 980 Ti does seem to hit 40-70 MHz higher Boost clocks over extended gaming periods. As you'll soon see in our benchmarking, this is likely the biggest reason that, despite the supposed disadvantage in CUDA core count, the GTX 980 Ti and the Titan X are basically interchangeable in gaming capability. I also suspect that some changes to the GPU power delivery help a bit, as indicated by the 92C (versus 91C) rated thermal limit for the GTX 980 Ti.
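A quick illustration of why that boost behavior washes out the core deficit: shader throughput scales roughly with cores times clock. The boost clocks below are assumptions chosen to match the 40-70 MHz gap described above, not measured values:

```python
# cores x clock as a rough throughput proxy (assumed typical boost clocks)
titan_x  = 3072 * 1075   # Titan X at an assumed ~1075 MHz typical boost
gtx980ti = 2816 * 1140   # 980 Ti at ~65 MHz higher, per the observation above

print(f"980 Ti relative throughput: {gtx980ti / titan_x:.1%}")  # ~97.2%
```

At roughly 3% apart, the two cards land within normal benchmark noise, which matches the "basically interchangeable" result in our testing.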
In terms of TDP, the GTX 980 Ti is rated at 250 watts, the same as the GTX Titan X and 40 watts lower than the R9 290X. In reality, through our direct graphics card power consumption testing, the GeForce cards err on the side of caution while the AMD Radeon R9 series swings the other direction, often drawing more power than its TDP would suggest. The GTX 980 Ti uses the exact same cooler design and even the same fan curve implementation (according to NVIDIA), so sound levels and cooling capability should be, yet again, identical to the Titan X.
Might want to fix the typo:
“..the GTX 780 Ti finds itself in a unique spot in the GeForce lineup”.
I think it’s supposed to be the 980 🙂
Fixed!
Ryan,
You do realize The Witcher 3 and Project Cars had GameWorks. Saying their driver development is miles ahead because of that is rather silly and irresponsible, especially given how the developer statements and patches for both have panned out.
Those aren't really the only two examples though. GTA V, for example.
I think we are going to dig into this more with some vendor interviews in the near future.
The explanation you gave referred to having optimized drivers at release for GameWorks-backed titles. It's silly to make such a statement given the obvious nature of the business.
The same kind of silliness would be to imply that AMD having a driver out at release for an AMD Gaming Evolved backed title translates into being miles ahead.
I hope you ask how come console optimization on GCN isn't translating to the PC, given how much time is spent on Xbox and PS4 development.
I also hope you don't get the usual PR spin from either side.
The big difference is that nvidia had day-0 drivers for Gaming Evolved titles, and AMD didn't have them at the launch of TWIMTBP titles.
Hell, AMD often didn't have proper drivers even at the launch of Gaming Evolved games and the like.
So don't be silly and stop seeing "silliness" in Ryan's review, because he stated a fact:
When it comes to drivers that support games at launch, nvidia is the best.
Ex:
Battlefield 3 and 4 (not literally GE titles, but more than that: a Mantle version, 8 million dollars, Johanson doing PR for AMD, etc.), BioShock Infinite, Civ: Beyond Earth. And many more.
Exactly, Nvidia is going to release Day 0 or Day 1 drivers for ANY big game launches regardless of branding because in the end, they know their users demand this level of support.
It is almost as if AMD is spiting their own customers in an attempt to make Nvidia/GameWorks look bad, when in the end it only hurts them.
Now that’s what you call nvidiot shilling…
I think you need to look in the mirror, mate. It's not Nvidia's fault that AMD didn't have a driver out in time for Project Cars' release.
This is from Slightly Mad Studios
“Project CARS is not a GameWorks product. We have a good working relationship with nVidia, as we do with AMD, but we have our own render technology which covers everything we need.”
“NVidia are not “sponsors” of the project. The company has not received, and would not expect, financial assistance from third party hardware companies.”
“We’ve provided AMD with 20 keys for game testing as they work on the driver side. But you only have to look at the lesser hardware in the consoles to see how optimised we are on AMD based chips.
We’re reaching out to AMD with all of our efforts. We’ve provided them 20 keys as I say. They were invited to work with us for years, looking through company mails the last I can see they (AMD) talked to us was October of last year.
Categorically, Nvidia have not paid us a penny. They have though been very forthcoming with support and co-marketing work at their instigation. We’ve had emails back and forth with them yesterday also. I reiterate that this is mainly a driver issue but we’ll obviously do anything we can from our side.”
So AMD can’t get a driver out for the release of the game. Nvidia spends more money on driver development for their game ready drivers.
This is a reason to buy an Nvidia card. Reviewers should mention this. Fanboys shouldn’t go crazy at reviewers for mentioning it.
So if AMD didn't get advance notice, how come they already have a new driver release available? The issue is that it should have been ready the day before the release date.
Nvidia don't need to "pay" directly… dev support & free marketing/kit are enough. Having code and drivers tuned together is what nvidia banks on. Indeed, they flagged their fear of AMD doing exactly this in an article by Anand a number of years ago. AMD didn't take this step, but nvidia is paranoid (like Intel), so they did. Can't blame them with AMD locking up the gaming consoles. AMD only had access to the Cars RTM a month before drop. Nvidia's code lockout deals & AMD not spending on dev outreach drive the current situation.
So why doesn't AMD offer similar support? Why was their last correspondence with a game developer last October? AMD has been behind on driver support for a long time. That's why they pushed Mantle so hard: it would move the burden away from AMD and onto the developers. If you give developers access to the bare metal you don't need game-specific drivers.
The only issue is we are not there yet, and as a result AMD's poor driver support is being shown up. nVidia spends a boatload on their drivers and GeForce Experience. In the free market you get rewarded for effort and punished if you don't keep up with your competition. Sadly, the lack of competition in graphics means this is only going to happen more and more often.
Pcars was delayed, so of course they only got the RTM just before release, but if you read the quotes, Slightly Mad Studios gave access to the game as they were developing it; AMD simply didn't communicate back to the studio.
Lack of money, one assumes.
The developers (of the game) still hold responsibility for the performance of their game; blaming others is a cop-out. To me, as a non-game developer, it seems more than a little ridiculous that every game (written to a supposed standard) requires driver customization.
“NVidia are not “sponsors” of the project. The company has not received, and would not expect, financial assistance from third party hardware companies.”
With all the Nvidia logos plastered over every flat surface they could find, I have a really, really hard time believing this claim.
ThIs!! ^^
This is PCPerspective; we take Nvidia as God's word.
lol Plastering your logo on something because you're a sponsor and doing it as advertising because you did some work for them are two different things. It's no different than the guys doing your roof putting a sign with their company logo and phone number on your lawn.
The recent day 1 driver updates from nvidia have been awful, the Witcher and Project Cars updates among them. While my friends on AMD cards took a small performance hit due to idiotic nvidia features, they could at least play the game without it crashing every 10 min. A significant number of people needed to roll back just to be able to play. A problem I did not have on my former 7870; I'm using a 970 now.
Having a day 1 driver ready counts for jack shit if it works like crap, something AMD seems to have learned, and dissing them for it while giving nvidia a free pass on shit drivers does not seem like an objective review… ijs…
You’re mental. EVERYBODY knows that AMD drivers always always always suck, and Nvidia drivers come gold-plated.
(/s)
ProjectCars isn't Gameworks. You aren't going to develop for nVidia if you are releasing on consoles with AMD hardware. Why does Project Cars play perfectly well on the consoles but not on AMD-based PCs? Because of the drivers?
Perhaps because the console port is tuned appropriately for the fixed hardware, and the PC port had nvidia input to redress the performance balance of a straight-from-console GCN port? PhysX (CPU) is used for some physics calcs in the PC port. The latest AMD drivers only improve performance by 10%, & Win10 by 25%, apparently. This is what happens when architectures diverge. Nvidia basically made Maxwell a great gaming card for now; all compute capabilities are reduced & DP is bye-bye. What will be the outrage if AMD does the same in corresponding games?
“the fact that you can get identical performacne for $350 less is a great thing”
I don’t want acne!
Well, the 980 Ti performance-wise is about where it was expected. The question now is whether the AMD Radeon Fury is going to be fast enough to justify its price, if the rumored $850 is true.
The compute performance alone on the AMD Fury, at 8.6 TFLOPS, will justify the price to the prosumer. The Titan Z is the closest in single precision at $3000, a dual GPU with 8 TFLOPS & lousy double precision. If Fury keeps its double precision intact, unlike the Maxwell-based cards, it will destroy anything they can offer until next year.
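For reference, that 8.6 TFLOPS figure falls out of the rumored Fiji specs; a quick sketch using the same FMA arithmetic as above (the core count and clock are rumor-mill assumptions, not confirmed):

```python
# Rumored Fiji specs: assumptions at time of writing, not confirmed
fiji_cores, fiji_clock_mhz = 4096, 1050

tflops = fiji_cores * 2 * fiji_clock_mhz * 1e6 / 1e12
print(f"{tflops:.1f} TFLOPS single precision")  # 8.6 TFLOPS
```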
For gamers, compute performance means absolutely dick. If it meant anything to gamers then AMD would have been massively ahead since the 7000 series cards.
Oh the irony.
Anyway, what's keeping compute back in games is Nvidia and their crappy compute cards. There's a reason TressFX is faster than HairWorks even on Nvidia cards, and that is because it's based on DirectCompute.
Maybe AMD needs to get better at DX11 tessellation, which is A STANDARD. Funny how AMD fans want nvidia to build things on standards, and that's exactly what HairWorks uses. Must hurt pretty bad knowing AMD's top dog 290X gets BEAT by a 750 Ti in tessellation performance.
Well, they got better tessellation, with Tonga.
Kinda sad they got better tessellation with Tonga when I had to listen to and read the bloviating AMD fan gasbags blabbering that AMD had tessellation hardware in the HD2900 and it shows how awesome they are and how far ahead they are in tech, and blah blah blah blah blah!
That turned into, for the past 3 or 4 YEARS… "wahhhh wahhh nvidia is forcing tessellation on blank walls and along sidewalks and no one needs 16x tess or more ever!"
Thus, the stupendous hardware AMD junkie wacko went from insanely incoherent and inconsequential hardware braggadocio to blubbering and whining in victimhood, again, when their holier-than-thou AMD epic failed.
Hairworks is tuned to nvidia's front-end geometry strength on higher-end GPUs, but is a dumb way of implementing hair. It basically occupies all the pipeline stages: GS-HS-DS-VS. >64x amplification is pointless at the sub-pixel level. 8xMSAA is the icing on top… So no, even the single/dual primitive (tris) pipeline GPUs from nvidia also suck. The cost to Maxwell is static scheduling, read: dumb…
It will be interesting to re-visit this stuff with DX12, when nvidia's better DX11 multi-threaded performance (2 threads/cycle) won't matter, as their command processors are limited compared with AMD's supposedly superior command rate of >4 threads/cycle. Maxwell's current static model may indeed hurt moving forward, but GCN ain't no panacea either and makes a different set of compromises. Time will tell whether DX12 & compute will hurt Nvidia buyers.
* The irony of posting this in a Titan review.
Always have a hard time with “irony.” Can you please explain the irony here? Thanks much in advance.
Most likely not, and I think this is why Nvidia shot first at $650. They’ve basically pre-emptively undercut Fury’s pricing. It will have to beat 980Ti/Titan X by 15-20% to keep that $850 price tag, and if it is +/-5% it is going to be $500-$600 max. Certainly a far cry from $850 aspirations.
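A minimal sketch of the pricing arithmetic behind that argument (the $850 figure is the rumor discussed above, and the 15-20% threshold is the commenter's hypothetical):

```python
price_980ti, rumored_fury = 650, 850

parity_uplift = rumored_fury / price_980ti - 1
print(f"Equal price/perf would demand ~{parity_uplift:.0%} more performance")  # ~31%
# A 15-20% win falls short of that linear scaling, so the $850 tag would
# rest on a flagship premium rather than raw price/performance.
```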
Howz that Titan X going for you? Ouch that’s gotta hurt.
Just a suggestion, but it would be helpful if, when you hover your mouse over the charts, a popup described what each one is or what it measures (FPS by Percentile, Frame Variance, Frame Times, etc). I can't be the only one who really has no idea what most of the charts are depicting.
Love the site and content, fantastic article. Keep up the great work!
Do you mean a more detailed description? Because the graph does indicate Frame Variance, etc., in its title.
Yes, exactly: what frame variance is and what the measurement is of (and the same for the other graphs). I know you have an entire article on this but a quick sentence recap would be nice.
SOLD !!
This card is going to find a new home in my gaming PC as soon as a decently overclocked version hits retailers. My stock 780 at 1440p is showing its limits with Witcher 3, and I'm sacrificing a lot of eye candy just to keep a reasonable fps rate. The G-Sync monitor helps, but I must admit I am an eye candy addict and the 780 can't deliver at 1440p. I will move the 780 into another PC at the cottage for 1080p gaming, and the 760 there to my family room PC at home.
I debated getting the Titan X but just couldn't justify the cost for single-monitor 1440p gaming. This I can justify, and I think I can get 2 years out of it like the 780 before I upgrade and shuffle cards again 🙂
Waiting for the Gigabyte Windforce G1 Gaming edition.
Well, almost bang on the money. This price also tells me something about the other side as well.
How long until someone finds out it's missing something, like the 970?
The 970 wasn't missing anything at all. It was a different way of utilizing the memory and a different memory architecture design. It was marketing not being clear on the packaging and advertising about that change, but the consumer didn't miss out on anything and got the full 4 gigs when they needed it.
Ummm, it was missing TMUs & L2 cache, which resulted in the 3.5GB+0.5GB memory partitions…
**Ryan, do we have definitive confirmation that all TMUs & cache are retained with the removal of 2 SMMs?** The block diagram is only marketing, as TMUs & L2 (especially) are tied to SMM clusters.
Let me answer my own question…
Excising 2 SMMs @ 4 pixels/cycle leaves 88 ROPs for single-cycle operation even though 96 ROPs are specced. TMUs are down to 176 from 192, but the L2 partition & memory controllers are apparently unaffected, so performance/clock will be at worst ~8% lower than Titan. With a bit more power headroom, the 980 Ti will trade the performance lead with Titan depending on the workload bottleneck. Not too bad for 6GB & $350 less.
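A quick sketch of that arithmetic, assuming Maxwell's standard 128 CUDA cores and 8 TMUs per SMM (the ROP/L2 partitions sit in a separate backend, which is why they can survive the cut):

```python
CORES_PER_SMM, TMUS_PER_SMM = 128, 8   # standard Maxwell SMM resources

full_smms = 3072 // CORES_PER_SMM      # 24 SMMs on the full GM200 (Titan X)
cut_smms = full_smms - 2               # 22 SMMs with two disabled (980 Ti)

print(cut_smms * CORES_PER_SMM)        # 2816 CUDA cores
print(cut_smms * TMUS_PER_SMM)         # 176 TMUs, down from 192
print(f"{1 - cut_smms / full_smms:.1%} fewer shader resources")  # 8.3%
```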
I am waiting to see tests to determine specifically what the situation is. It would be stupid for them to misrepresent the actual memory bandwidth again, so I am leaning toward it being correct and equivalent to the Titan X. There is also the fact that the 980 Ti performs almost exactly the same as a Titan X, even though it has less hardware. This implies that they have the same bandwidth and are also memory bound. They may get a good boost out of memory overclocking, although I assume that GDDR5 has been pushed about as far as it will go.
I currently have 2 Gigabyte Windforce G1 Gaming 980’s in SLI.
That should last me a while (gaming at 1440p on a 144Hz G-Sync panel).
I’m waiting to see what Pascal brings to the table.
Yes, indeed……….I need one of these!
New GPU releases are about as exciting as CPU releases have been. Gone are the days when we saw dramatic performance increases with each new generation.
I haven't been moved to buy a GPU since the 8800GTX in 2007, for £340.
Since then it seems everything is just a rebrand of a rebrand with no new advancements, well, unless you want to spend a thousand pounds, that is.
And all this amd/nvidia is getting real old.
zzzzzzzzzZZZZZZZZZzzzzzzzzzzzZZZZZZZZZZZzzzzzzzzzzzzzz
If this was a preemptive move by Nvidia anticipating the Fiji release, I wonder what they know. Impressive card. It just sucks that flagships these days are so expensive.
Maybe they know something.
I certainly don’t recall a GPU review on a Sunday.
Hmm, the only thing I remember ever coming out on a Sunday was nvidia's response to the 970 debacle.
Now with this card reveal behind us… On to Fiji!
Computex starts Sunday at 6pm PST. They want to demo this thing on the show floor.
Ordering one ASAP when available in the shops here in the UK!
My Acer Predator XB270HU 2560×1440 IPS 144Hz G-Sync monitor needs more GPU power!
Guys, it's not 1999, please update your game testing suite. Why not Witcher 3? Or GTA5, or other modern games?
Ryan,
Why did the R9 295X2 appear to have larger frame time variance in games at 1440p than at 4K?
Likely because there is less of a GPU bottleneck at the lower resolution, and thus there is more push back on the CPU, where the multi-GPU management takes place.
It's interesting to see the 290X being so close to the GTX 980.
What I don't get is how both cards have very similar gaming performance and the same amount of memory, but one retails for around $280 and the other for $500?
Side note: the gap gets small to nonexistent at 4K between those two cards, suggesting that the perf difference at lower res might be driver related.
DX12 will help equalize the driver situation, and it would be hilarious if a $280 card ends up outperforming a $500 card, even if it's just by a few frames.
And one is 1.5 years old.
Still stuck at 28 nm, so we haven’t been getting the same generational gap we did with previous generations.
It's hard to get excited about a $650 GPU when price/performance falls off a cliff past $350. With how well SLI works these days, I'd be buying 2×970s long before a 980 Ti.
The NVIDIA card uses way less energy, no?
Very nice review. I considered the acquisition of an awesome 980 Ti card, but it turns out Nvidia's new Kepler performance driver killed off that idea. My single 780 @ 1200-1250MHz in-game in The Witcher 3 is running beautifully: at 2560×1440 in the high 40s or 50s fps with most details maxed out, other than HairWorks.
Win. 🙂
Ryan, why not run a test that really fully utilizes the 6GB of VRAM? Thanks.
As the other guy mentioned, I'm curious as to why the 290X is so close to the 980 in terms of performance.
I would also much rather see modded Skyrim testing than vanilla Skyrim.