A Beautiful Graphics Card
It was a surprise when it launched, and a surprise when it showed up on our doorstep. How about a review?
As a surprise to nearly everyone, on July 21st NVIDIA announced the new Titan X graphics card, which is based on the brand new GP102 Pascal GPU. Though it shares a name, for some unexplained reason, with the Maxwell-based Titan X graphics card launched in March of 2015, this card is a significant performance upgrade. Using the largest consumer-facing Pascal GPU to date (with only the GP100 used in the Tesla P100 exceeding it), the new Titan X is going to be a very expensive, and very fast, gaming card.
As has been the case since the introduction of the Titan brand, NVIDIA claims that this card is for gamers that want the very best in graphics hardware as well as for developers who need an ultra-powerful GPGPU device. GP102 does not integrate improved FP64 / double precision compute cores, so we are basically looking at an upgraded and improved GP104 Pascal chip. That’s nothing to sneeze at, of course, and you can see in the specifications below that we expect (and can now show you) Titan X (Pascal) is a gaming monster.
|  | Titan X (Pascal) | GTX 1080 | GTX 980 Ti | TITAN X | GTX 980 | R9 Fury X | R9 Fury | R9 Nano | R9 390X |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GP102 | GP104 | GM200 | GM200 | GM204 | Fiji XT | Fiji Pro | Fiji XT | Hawaii XT |
| GPU Cores | 3584 | 2560 | 2816 | 3072 | 2048 | 4096 | 3584 | 4096 | 2816 |
| Rated Clock | 1417 MHz | 1607 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz | 1050 MHz |
| Texture Units | 224 | 160 | 176 | 192 | 128 | 256 | 224 | 256 | 176 |
| ROP Units | 96 | 64 | 96 | 96 | 64 | 64 | 64 | 64 | 64 |
| Memory | 12GB | 8GB | 6GB | 12GB | 4GB | 4GB | 4GB | 4GB | 8GB |
| Memory Clock | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 500 MHz | 6000 MHz |
| Memory Interface | 384-bit G5X | 256-bit G5X | 384-bit | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 512-bit |
| Memory Bandwidth | 480 GB/s | 320 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s | 512 GB/s | 320 GB/s |
| TDP | 250 watts | 180 watts | 250 watts | 250 watts | 165 watts | 275 watts | 275 watts | 175 watts | 275 watts |
| Peak Compute | 11.0 TFLOPS | 8.2 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 11.0B | 7.2B | 8.0B | 8.0B | 5.2B | 8.9B | 8.9B | 8.9B | 6.2B |
| Process Tech | 16nm | 16nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $1,200 | $599 | $649 | $999 | $499 | $649 | $549 | $499 | $329 |
GP102 features 40% more CUDA cores than the GP104 at slightly lower clock speeds. The rated 11 TFLOPS of single precision compute of the new Titan X is 34% higher than that of the GeForce GTX 1080 and I would expect gaming performance to scale in line with that difference.
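Those peak-compute figures fall out of simple arithmetic: FP32 FLOPS = CUDA cores × clock × 2 (each fused multiply-add counts as two operations). A quick sanity check, using the roughly 1531 MHz boost clock NVIDIA rates the Titan X figure at (an assumed value on our part, since the table lists the 1417 MHz base clock):

```python
def peak_tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Peak FP32 throughput in TFLOPS: cores x clock x 2 FLOPs per FMA."""
    return cuda_cores * clock_mhz * 2 / 1e6

titan_x = peak_tflops(3584, 1531)   # ~11.0 TFLOPS at the rated boost clock
gtx_1080 = peak_tflops(2560, 1607)  # ~8.2 TFLOPS at the base clock

print(f"Titan X (Pascal): {titan_x:.1f} TFLOPS")
print(f"GTX 1080:         {gtx_1080:.1f} TFLOPS")
```

Both results line up with the rated numbers in the specification table above.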
Titan X (Pascal) does not utilize the full GP102 GPU; the recently announced Quadro P6000 does, however, which gives it a CUDA core count of 3,840 (256 more than Titan X).
A full GP102 GPU
Compared to the complete GPU, the new Titan X effectively gives up 7% of its compute capability, although that cut-down likely helps increase available clock headroom and yield.
The new Titan X will feature 12GB of GDDR5X memory, not HBM as the GP100 chip has, so this is clearly a unique chip with a new memory interface. NVIDIA claims it has 480 GB/s of bandwidth on a 384-bit memory controller interface running at the same 10 Gbps as the GTX 1080.
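That 480 GB/s claim is simply the bus width times the per-pin data rate, divided by eight bits per byte; a minimal sketch:

```python
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width x per-pin rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(memory_bandwidth_gbs(384, 10.0))  # Titan X (Pascal): 480.0 GB/s
print(memory_bandwidth_gbs(256, 10.0))  # GTX 1080:         320.0 GB/s
```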
Other than these changes, and corresponding improvements in texture unit and ROP counts, there really isn’t anything architecturally different in the Pascal-based Titan X compared to a GeForce GTX 1080. Just more, better, and faster. If you are new to NVIDIA’s latest Pascal architecture, its product features, and what the move to 16nm nets them, you definitely should read our GeForce GTX 1080 review, which covers all of that!
What will you be asked to pay for this performance? $1,200, going on sale today, and only on NVIDIA.com, at least for now. Considering the prices of GeForce GTX 1080 cards with such limited availability, the $1,200 price tag might not seem so insane. Still, that's higher than the $999 starting price of the Maxwell-based Titan X in March of 2015, so the claims that NVIDIA is artificially raising prices of cards in each segment will continue, it seems.
The NVIDIA Titan X (Pascal) Graphics Card
Our time was short with the new Titan X, as our team prepares for a three week whirlwind of events, but we wanted to get a quick review of this beast out the door ASAP.
The new Titan X features the same design language that started with the GTX 1080, a riff on the now-aging design for NVIDIA reference products. This includes a blower-style cooler with an illuminated GeForce GTX logo along the top of the card (interestingly, one of only a few places I see referencing GeForce with this product) and a window to see the heatsink under the shroud.
Rotating the card around the back we find a full cover backplate on the Titan X with an optional segment on the back half you can remove to improve airflow on adjacent graphics cards in SLI. The backplate even has a custom Titan X stamp on it.
Though the shroud design is shared with the GTX 1080, the Titan X goes with a blacked-out color scheme and a chrome “TITAN X” logo along the front.
Display connectivity remains unchanged: three full size DisplayPort connections, one HDMI 2.0a and a dual-link DVI connection for legacy displays.
With a 250 watt TDP, the card includes both a 6-pin and an 8-pin external power connection. That is more than enough to supply 250 watts, and it allows the card to draw as much as 300 watts when overclocked.
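The power budget works out like this: the PCIe x16 slot is specified for up to 75 W, a 6-pin connector adds 75 W, and an 8-pin adds 150 W:

```python
# Power budget per the PCIe spec limits cited in this review
PCIE_SLOT_W = 75    # maximum draw through the x16 slot
SIX_PIN_W   = 75    # 6-pin auxiliary connector
EIGHT_PIN_W = 150   # 8-pin auxiliary connector

max_board_power = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W
print(max_board_power)  # 300 W total, leaving headroom above the 250 W TDP
```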
Titan X (Pascal) includes a set of SLI connections to support the new high-bandwidth SLI bridges, though officially still only in 2-Way SLI.
Till recently I was not aware that overclocking a GPU can put the motherboard at risk.
Does any motherboard vendor advertise maximum power over PCIe?
The PCIe spec states that video cards can only draw a maximum of 75W through the slot. To my knowledge, no consumer motherboard manufacturer increases this, as it would be outside spec and require the vendor to remove official PCIe support and advertising (a similar thing is done with video cards that need more than a 6-pin + 8-pin power configuration).
True. Motherboard manufacturers, especially of enthusiast overclocking lines such as the current Z170 Intel boards, are particularly keen on ensuring that their premium overclocking motherboards crap out at anything over 75 watts when users attempt a power hungry overclock.
In addition, motherboard manufacturers also ensure that there is absolutely no tolerance on their cheap motherboards to provide an iota above the 75W ‘spec’. They are particularly interested in having their boards fail in such a scenario, as they can then use the RMA process as an opportunity to showcase their exemplary Customer Support.
Kidding aside, we've found that seven premium boards don't like high draw on the slot power. Our GPU testbed is a good quad-rated board and it drops 0.5V when supplying a single GPU over the current limit. These guys really need to stick to drawing most of their power from the 6/8 pin connectors.
Wait, is that because of the slot power? I’ve found that GPU load typically drops the 12V value. I assumed it was just the PSU itself not putting out the voltage because of the load.
You seem to suggest it’s the motherboard load that causes it… but that seems unlikely. Do you measure the voltage from the PSU directly, or just use the reading from monitoring software? This would be solved by checking the PSU output voltage at the same time.
That is a very good point. It is unlikely he has the PCIe slot adapters needed to actually measure the voltage at the slot.
I see the day when a single GPU has an extended “bottom” and spans two PCIe slots.
“Titan X is 70-120% faster than the fastest single GPU AMD graphics card”
Please test against RX480 in 4 way crossfire.
That’s not a single GPU so the claim doesn’t apply. As it is, Xfire scaling is hit or miss, so odds are the Titan X would still win a majority of tests.
I'd actually be willing to do this, if I had four cards! 😀
Time to call AMD; they should help you try to beat that beast of a card… But the power draw would be 2x as much, tho. So really, are you winning?
Maybe you can’t find 4 cards easily, but maybe you can find the latest drivers?
Maybe the latest drivers had some reported oddball issues that Ryan didn't want to waste his limited testing / writing time screwing around with?
I find it hilarious that you have to resort to petty bickering with another reader. Seriously dude, grow up and be the professional that Josh, Ryan and Sebastien are and stop arguing and trying to prove you are better than a reader.
Lol @ the people the internet brings out.
Pretty much.
Really. And you think your behavior is any different? You are defensive and sarcastic and disparaging. You should be receptive and cordial. Leave being dickheads to us readers.
How dare you call me a "professional"!
Josh: The Professional (1994)
Nice movie.
:p
DRARARARRARA HAHAHAARARRARAA HAHAHAHAHA
Actually, that was a typo from a copy/paste that I brought over. Thanks for pointing it out. Updated to the following:
AMD: 16.7.2
NVIDIA: 368.98
Why do you come around here if you don't like what I have to say? Honestly curious.
Who cares?
The guy gets wrecked with every comment he makes. Kind of fun to see actually.
Ah! OK. People are shouting at reviewers for not using the very latest drivers. I think you can understand that being a number of versions behind would draw even stronger reactions.
About your question.
Two reasons.
In the land of the blind you follow the one-eyed people, and while I might have some concerns about things you (the site) post, and most times about things you don’t post, I do agree, like most people, that you do an excellent job on the technical side.
Second, not everyone who goes hysterical in here wants to harm the site. Some people just want to push you to get better and more objective. A few years ago I was telling you that your promotion of AMD’s A10 7850K was just too obvious.
Shouldn’t your testing have been done using the 369.05 WHQL drivers that were released specifically for the Titan X(Pascal) by nVidia on 8/2? Just wondering…
How many do you have? I’d loan you mine for a few days to pull it off 🙂
Have PCPerspective, Paul’s Hardware, JayzTwoCents, and TekSyndicate join forces for the 4 way 480 vs the TitanX.
(this) x10^32
Don’t forget to test that config against a single Titan X on games that don’t support multi GPU while you’re at it! dipshit…
The funny thing is, even though I’m sure it would have horrible scaling at that fourth card, it would still be cheaper to buy 4 RX 480s than to buy a single Titan X.
4 way crossfire? Wow you AMD fanbois are getting desperate
There are any number of ways to interpret that request that have nothing to do with “fanbois” or desperation. For example:
“Damn, that is one efking wicked graphics card. Is it even possible for a smash of RX 480s to catch up to it?”
“Nvidia’s giving up on more-than-2-way SLI. AMD is all about supporting up-to-4-way CFX. Let’s grab a new Titan X, two 980 Ti’s, two vanilla Fury’s, and four RX 480’s and just see what happens.”
“Just what kind of GPU would AMD need to build to compete with the Titan?”
“Hey, let’s have some fun with this.” Or, depending on the group, “Here, hold my beer, I got an idea.”
Why do you imbeciles always have to default to fanboy accusations?
Both SLI and CrossFire suck big red donkey wankers; the future is Vulkan/DX12 multi-GPU adapter managed via the API, with the games/gaming engines managing all the GPU resources. That way any GPU plugged into the computer can get gaming workloads sent by the games/gaming engines. You can be damn sure the gaming engine makers will be competing with each other to get the most out of all the available GPU power on a PC via Vulkan/DX12! And with the whole damn gaming industry, OS industry, and API developers working to get the best multi-GPU scaling going for VR and 8K gaming, maybe even 4 RX 480s and 4 GTX 1060s could be tested. JHH may not allow GTX 1060s to use SLI, but what is JHH going to do to stop the multi-GPU adapter feature in the Vulkan/DX12 APIs from working?
Everything he possibly can do, he does. He does NOT want any graphics card linking with HIS, which is why multi-adapter (i.e. Radeon and GeForce together) was essentially “killed” many generations ago, primarily because Radeons were pushing much better results with PhysX than Nvidia’s own products were able to do. Can’t have that lol.
Lord knows JHH/Nvidia are all about $$$$$$$$$ not making happy customers ahem Gsync when they so could have jumped on board Freesync so everyone gets the BEST experience possible no matter if you choose to buy a Nvidia, AMD, Via or whatever.
Am sure if he had it his way (surprised he has not done so yet) he would be planting a chip in every Nvidia GPU that would prevent multi-GPU use beyond “spec”, i.e. something like a 1050 Ti would NOT be able to use more than one per system.
Gratz on wanting to show your product as very good; it’s terrible for us consumers, because we can’t have real choice unless they could all just “get along”.
The thing with explicit multi-adapter, multi-GPU via DX12/Vulkan, is that it relies on the developer to ALLOW it, i.e. code for its use on a per-game basis. Nvidia sure as hell is not going to force the code, nor will AMD truly implement it via their own product drivers, which may pose performance issues or prevent their own products from selling as they would like.
Game devs are awesome, as the high-spec games we love to play are not easy to make, but they are usually quite horrid at delivering the best-performing/most robust code possible for a specific title. For example, DX12 titles: there are like 2-4 that use it quite well, but even those are not using EVERYTHING DX12/Vulkan are capable of; the remainder that have DX12/Vulkan on the box really gain nothing from it IMHO. It just “sells”, kind of like “made for Windows version X, Y, or Z” often does NOT work as it should lol.
no thermals ?!
I have a feeling this card will throttle just like the Maxwell v2 Titan X did
There are some temperature listings on the overclock page.
“testing at 4K with the GTX 1060 running at a +150 MHz offset.”
1060 should be Titan X
Whoops thanks!
It’s amazing to see that a 5960X is the bottleneck when running GTA5. Just how taxing on the cpu is that game?
If, for example, the game is coded to operate on only 4 cores, then the 5960X is no better than an underclocked i5-4690K.
(I would honestly have no idea how many cores GTA5 uses, I’m just brainstorming.)
I actually have a 4690K, and all 4 cores are almost always pinned at 100% while I’m playing GTA5. I know that the game needs more than a quad core, but I’m surprised that a hyperthreaded eight core isn’t enough.
No CPU is powerful enough to handle the power virus 2D menus known as Scaleform UI.
I would LOVE to see Ryan do an investigation into this phenomenon, but I don’t know if anyone will.
I, and many others, have been bitching about Scaleform UI menus being power viruses that max out CPU and GPU arbitrarily.
Most modern games use it and it seems that in many implementations it causes the GPU to boost to max clock and thermal limits for no real reason.
Same happens with 1080 SLI in other games; I’ve seen this happen in BF4 on the Siege of Shanghai map.
Edit: like others said, it’s utilization, not a hardware bottleneck.
NVidia’s launch schedule is much tighter this year, no? Makes me wonder how long before the 1080ti comes out….
No the real question here is, how long before a Pascal refresh.
We will see a 1080 Ti and then next line up will be Volta in Q2 2017.
Skipping Pascal for sure. Not impressed AT ALL.
Considering that GM200 and GM204 showed the least generational improvements in ages, because they were 28nm just like GK110 and GK104, what kind of improvements were you expecting?
GP100, GP102 and GP104 are all huge improvements on what they replace, unlike the underperforming 900 series.
Pure rubbish. Huge improvement over the cards they replaced.
More like “launch” … when can you actually just buy one and not wait a month?
So 4K gaming for the mainstream is years off by the looks of it.
I love PC gaming but not at this price.
That’s one fast card tho.
Thanks for the review.
this year’s 1070 is as fast as a Maxwell Titan X. So more than likely an 1170 will be as fast as a Pascal Titan X, so I’d say two years if mainstream is considered no more than $250 (GTX 1260).
The 1170 will probably be as fast as the 1080. Don’t be fooled by the jump from Maxwell to Pascal; we moved from 28nm to 16nm FinFET with that change.
970 was on par with 780 Ti, both on 28nm.
Nothing about Pascal is impressive so far. Same performance per dollar, and still lacking hardware async compute support.
Asynchronous compute would help to some degree, but more memory bandwidth and decreasing latency between the CPU and GPU would likely do much more. There are also many asynchronous compute techniques, so which are you referring to?
Incorrect. Pascal has built-in hardware async compute:
http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9
The biggest difference in Pascal’s async compute capabilities over Maxwell 2 are 1) dynamic load tasking (vs. static in Maxwell 2), and 2) 25% increase in overall concurrent execution.
The bottom line is that Pascal does have true hardware async compute functionality. Not saying it’s better than AMD’s approach, but it is there nonetheless.
No, not true. 1070 can’t touch Titan X Maxwell when both cards get overclocked to their max (2.1GHz vs 1.5GHz).
Incorrect, though the performance is close when comparing a stock clocked Pascal card to a highly overclocked Maxwell card of the same class (i.e. GTX1080 vs. 980Ti). But don’t even try comparing a Maxwell Titan X to a Pascal Titan X…the ‘old’ Maxwell Titan X gets its ass kicked no matter what the clocks 😉
2-3 years i would say, for mainstream…
stay tuned!
“the claims that NVIDIA is artificially raising prices of cards in each segment will continue” What do you mean claims? The numbers don’t lie; it’s a fact.
The original Titan was created for two reasons: marketing, and creating newer, higher price points in the market. Nvidia saw that integrated graphics in AMD’s APUs and the latest Intel processors were killing the low-end market, so they needed more and pricier models in the high-end category. In the past you had multiple NEW low-to-mid-range models. Today Nvidia gives you 1-2 new low-to-mid-range models later and 2-4 new models in the high-end market.
Was it? I thought it was also the cheapest way to get over 1 TFLOPS double precision with 6GB of RAM. The K6000 was $5,000. The Titan was an amazing value for anyone who needed DP FLOPS but not necessarily ECC.
Yes, and the same for AMD’s Radeon Pro Duo: a developer can get at the pro drivers and develop for the pricey Pro versions without having to pay as much. Nvidia did have the same thing in mind for the Titan X, but did Nvidia even allow developers using the Titan X to get at the pro versions of the Titan’s drivers? $1500 is still less costly for AMD Radeon Pro Duo customers than $2000 to $4000, and the same goes for Nvidia’s Titan X used for compute without having the expensive pro hardware/drivers. AMD is a little bit more of a bargain for developers by offering access to pro drivers for development, and I am not sure if Nvidia offers the same driver deals with the Titan X for developers.
That $1500 looks like a bargain when you consider 2x Furyx at $400 a piece. 88% more costly. Almost double. Not so much. Even compared to Titanx Pascal it costs 25% more.
Both cards performance will be in the same neighborhood when there is good cross fire scaling for the pro duo. When there is not it will lose by 70-120% as it will be a single Furyx core vs the beastly Titanx.
The Titanx has 31% less compute than pro duo. It essentially has 3x the vram as it is not additive in crossfire. But even comparing 8 gigs vs 12 gigs then Pro duo would still have 33% less vram to use.
But AMD gives the pro drivers with the deal, so the Radeon Pro Duo can be used to develop for the production/professional GPU cards that cost much more! Is Nvidia offering Titan X users the pro/Quadro drivers, or will JHH make developers buy the more costly $5000 Quadro SKUs to get at the pro drivers? That pro driver certification adds a lot of cost to the pro SKUs’ price, so AMD’s Radeon Pro Duo SKU (at $1500) is a great deal that allows developers to get at the pro drivers from AMD.
2 Fury X gaming SKUs do not have the pro drivers, so no developer can use the gimped down Fury X gaming drivers for Pro development. AMD’s Radeon pro/real professional drivers are worth almost as much if not more than the price of the Radeon Pro Duo’s hardware, and the certification of the Professional drivers is very costly process to get the drivers to work with the professional graphics software.
The real deal with the Radeon Pro Duo is the pro drivers, and not the hardware as much! It’s good to have the GPU hardware but its great to have access to the Pro Drivers without paying $2500+ for the FirePro(Now called Radeon Pro WX series) hardware.
Gaming drivers are a joke in the professional world, as gaming drivers are gimped for FPS and not accuracy. Please do not confuse the two as they are very different for very different workloads, and the pro drivers can and do cost just as much as the hardware, and even more over the long term as professional drivers come with long term professional driver support, and that is a very high cost to maintain professional driver support! That is what makes the professional GPUs so expensive.
And look where Titan is now. Just an overpriced gaming card with extra memory.
The original Titan was a semi pro card because it needed to make sense. It was creating a new brand. But was also advertised and pushed as the absolute gaming card creating a higher price point also for the gaming cards.
In the past you had plenty of mid-low end models and only one hi end. Now you get 3 hi end models – Titan, X80Ti, X80 – and barely a new low end.
That was Nvidia’s business plan from the beginning because Nvidia doesn’t sell desktop x86 processors with integrated graphics or chipsets with integrated graphics.
AMD is just as guilty. In fact, one could argue that Nvidia wouldn’t be able to increase prices if there were actually any real competition in the marketplace. Where the hell are the Polaris high-end and enthusiast cards? The RX 490 is way overdue, yet is nowhere to be found, and AMD has yet to officially announce specs or a release date…
Meh, I’ll just buy another 980 used for $250, lol.
Good luck when SLI isn’t an option. I ran 980 SLI; now I have 1080 SLI. At least one 1080 is almost as fast as my 980 SLI, so it’s a good failsafe when SLI isn’t an option.
I only have a 1080p Gsync 144hz, but you may be right. It’s not like I’m running out to do it now, just saying the value proposition is there. On the flip side I’m not really interested in games made by lazy developers.
I own two GTX980Ti cards (still in my main system), an MSI GTX1080 Sea Hawk X (Hybrid), and a Titan X arriving tomorrow. I can tell you right now that a single GTX1080 does not come close in most games to performance vs. two GTX980Ti cards in SLI. The gap does get tighter at higher resolutions, but still not the same or near the same performance. Now, a single Titan X vs. GTX980Ti SLI will be a whole different story 😉
Oh look. 30% extra performance for twice the price. Die AMD die.
—————————–
By the way. Maybe you should update your AMD scores with newer AMD drivers? Just see it as an excuse to write an article with title:
Crimson 16.5.2 vs 16.7.3
or something.
https://community.amd.com/thread/203347
http://www.overclock.net/t/1606566/16-7-3-driver-for-amd-rx480-keeps-crashing
https://m.youtube.com/watch?v=Ekdte9UoGyE
I think we might just pass on that one until AMD gets it right.
Well, you haven’t passed on one driver version; you are using drivers 2-3 months old. Does a tech site need someone to remind them that they have to update drivers more often than the casual user? I guess so.
Yes, tech sites that give a crap need to use drivers that actually work, especially when trying to get a review out the door with limited testing time. We try to not just throw a new driver into the mix until we can make sure the thing works properly, and it's pretty obvious that one doesn't, so no use wasting the time on it.
Also, we typically get a heads up from AMD if large performance improvements are expected in an upcoming driver. We got no such notice since the PCIe power fix, but I think Ryan was only one version behind this latest one regardless.
Ryan wrote that it was a typo, the end.
Yes, the end for AMD…$6/share…pffft, nuff said. And RX 490 is AWOL 😛
#rekt
That was a copy/paste typo. I am using 16.7.2, same as for the RX 480 review.
When will you be finished with the Sapphire 480 Nitro review? You seemed to be well into it at the time of the Nitro podcast.
We're in the pre-QuakeCon crush. Might be a few weeks.
Will you test it in DOOM Vulkan/Async finally?
Probably never.
https://www.youtube.com/watch?v=oqrvXzXxlgM
Hey Ryan, do you think if AMD called it the RX470 that people wouldn’t be so quick to compare it to obviously higher end equipment? Marketing and fanboys eh?
Included in actual reviews?
This is not a gaming card, but a deep learning card. So let’s make 10 pages of gaming tests. :^)
Most people who do serious work with GPUs would need to verify their code on it themselves or read a site like The Next Platform(i post on there constantly too).
Wrong, it’s completely a gaming card. The deep learning (or whatever) version is the Quadro P6000.
You are actually incorrect. You might want to re-read Nvidia’s press release for the Titan X. There is a reason it’s officially NOT part of the “GeForce” family. Adding confusion to it all, to save time and to get the cards out for sale, Nvidia used an old shroud design on the Titan X that reads “GEFORCE GTX”. My guess is that the second revision will have a new shroud that says “TITAN X” 😀
GPU Boost continues to make a mockery of TFlop ratings.
TF @ 1417 MHz = 10.2 (Rated Clock)
TF @ 1535 MHz = 11.0 (Marketing Clock?)
TF @ 1660 MHz = 11.9 (Stock Avg Gaming Clock)
TF @ 1838 MHz = 13.2 (PCPer OC Clock)
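For anyone wanting to reproduce those figures, they all come from the same cores × clock × 2 arithmetic applied to the card's 3584 CUDA cores:

```python
CUDA_CORES = 3584  # Titan X (Pascal)

def tflops(clock_mhz: float) -> float:
    """FP32 TFLOPS at a given clock: cores x clock x 2 FLOPs per FMA."""
    return CUDA_CORES * clock_mhz * 2 / 1e6

for label, mhz in [("Rated Clock", 1417), ("Marketing Clock", 1535),
                   ("Stock Avg Gaming Clock", 1660), ("PCPer OC Clock", 1838)]:
    print(f"TF @ {mhz} MHz = {tflops(mhz):.1f} ({label})")
```

Running it reproduces the 10.2 / 11.0 / 11.9 / 13.2 TFLOPS ladder above.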
It definitely makes things more complicated. In my view, you want these companies to be conservative.
I was actually very surprised that an overclock of 1838Mhz was even attainable, let alone stable in benchmarking. I had thought that the stock heatsink and blower fan would not be up to the task. I can’t wait to get my Titan X on water, then we’ll see how much higher that GP102 will go!
*UPDATE* 5 October 2016
I’ve had my Titan XP watercooled now for over a month and am very happy with the performance:
Maximum GPU boost (120% power target, 92C temp target):
GPU @ 1864Mhz
GDDR5X @ 10000Mhz
Idle temp: 26C
Max temps @ full GPU load (benchmarking) 46C
Maximum manual overclocks @ stock voltage:
GPU @ 2086Mhz
GDDR5X @ 11000Mhz
Idle temp 28C
Max temps @ full GPU load (benchmarking) 50C
All temperatures include an Intel Core i7 3930K overclocked @ 4.5Ghz (1.41V) within the same cooling loop, so the temps are indeed awesome 😉
lol, AMD better hurry up, because by the time Vega comes out they will be on Volta. If that happens they will be so far behind they will be stuck doing budget cards. They did that with their CPUs and look how that worked out. Oh, and the next time I have to hear about 4x 480s I am gonna kill myself. 4-way with cards causes so many problems. 2 is fine, but even 3 is just garbage; I did it with my 680s and sold the 3rd 3 weeks later. Besides, could you imagine the wattage draw on AMD’s power-hungry cards? AMD has nothing to even come close to Nvidia in performance. The 480 is so underwhelming they had no choice but to sell it dirt cheap. It is a K-mart card. I guess I am happy to see the people on welfare getting to play PC games with decent settings tho.
So you’re saying that nvidia will launch 2 new chips in the space of 6 months? Impressive, but doubtful considering the nvidia greed :p
By the way, try to sound like less of a complete dick. It’d help you being taken seriously.
I think GPUs are actually different. With CPUs the main issue would be single threaded vs multi threaded. But GPUs have no such distinction.
If AMD has a good single threaded CPU with fewer cores and did budget, there would be no problem unless intel somehow managed to push multithreading and exposed that flaw.
With GPUs, it’s all about price points and performance. The only way it would be a problem is if Nvidia managed cheaper, faster GPUs vs AMD. As Titan X at $1200 is hardly going to change much for the average gamer, except maybe fool them into thinking that means all Nvidia GPUs are faster.
The 480 was always targeting the price point it is now, and imo they aren’t selling it dirt cheap. The price some of these go for is silly. Your sentiments are silly actually. K-mart card? I bet you can’t even afford a titan X and here you are fronting. Waiting till nvidia drops a card in your budget range so you can pretend its a titan x.
New Article from PCPer: “Titan X fails PCIe specs, draws more than allowed when OC’ed”… I’m so glad you started this “issue”, which actually isn’t an issue at all. But I guess it was in AMD’s case?! 😮
Actually, the 480 in 'fixed' form does the same when overclocked. We didn't make a big deal out of that when we retested it, for the same reason we are not making a big deal about it here.
Two problems with that.
1. People who buy a 480 rarely OC it, while those who buy a Titan X will most likely OC it.
2. Most people will buy custom 480s, which don’t have the issue, while you can only buy a reference Titan X.
So I think we have a serious issue here by what you said in the 480 article (even though I think it was never really an issue).
Overclocking is done at the risk of the owner. Use a supplemental PCI-E power connector on the motherboard, and BOOM – no more issues. NEXT!
If I hadn’t gotten my 1080 Hybrid SLI, I would have gotten this, as it seems like the perfect single-card solution for 1440p/165Hz. My only issue is that this is Nvidia-exclusive, so when you go to sell it the warranty isn’t transferable; also, removing the cooler to install an aftermarket cooler voids the warranty if they find out.
Agreed, and it is definitely a bummer. Modding a $1,200 graphics card is not for the faint of heart, even if in this case it’s the simple removal of a stock heatsink and installation of a water block. It is still something to be taken into account. I, for one, have decided it’s worth the risk. My EKWB water block should be here by the end of next week. If anyone is interested in seeing the results of the installation process, let me know via PM – thanks!
The price in the UK is £1,099.00 and I’m really, really tempted to upgrade my 980 Ti, but the 1080 Ti may be a thing though.
Speaking of the 1080 Ti, how do you think they will do the memory this time? Considering the 1080 has 8GB, surely they can’t give the 1080 Ti the same amount.
Probably 12 on the Ti, and it’ll perform roughly like this card.
It’ll also be the same G5X RAM. HBM won’t be a thing till next year for Nvidia at the earliest.
Nvidia has been using HBM2 on the P100 for months now.
is that even in use anywhere?
They sell in batches of thousands for a single supercomputer. They are selling them as fast as they can make them, just like Knights Landing and Fujitsu PRIMEHPC FX100s.
Not really relevant, is it?
It is a big thing for Nvidia though. Your statement is false.
And the HPC/server/workstation SKUs are where that HBM2 will stay, because the markups on the real pro GPU accelerator/workstation SKUs are much larger than on any consumer SKUs! And until the HPC/server/workstation market’s thirst for HBM2 is quenched, the gaming market is not getting its hands on HBM2! Money talks, and JHH is all about the markups!
Top AMD GPU is now 70-120% slower than Nvidia!!! Holy crap! No wonder Nvidia is getting away with murder pricing!
$400 vs $1,200…
http://www.newegg.com/Product/Product.aspx?Item=N82E16814150742&cm_re=fury_x-_-14-150-742-_-Product
I always love when people compare the current price to something that just came out at MSRP. The Fury X was $650 when it released.
Look, I found a Fury X for $697 – does this count?
https://www.amazon.com/Sapphire-Radeon-PCI-Express-Graphics-21246-00-40G/dp/B01012TLSS
Anyway, where are the benches against the Pro Duo at $1,500? Looks like the Titan X Pascal would wreck it, and it’s a single GPU and cheaper to boot. LOL
I’d compare best price vs best, unless the best was out of stock. $400 for a Fury X is pretty decent, as that is cheaper than a 1070 is right now (without lucky discounts, anyway).
Ultimately it’s of no concern. At least I would hope AMD is not thinking the way you guys are, for their sake. I would assume even you would buy a $500 AMD GPU if it gave you 80-90% of the performance of a Titan X.
The main obstacle for AMD is DX11 efficiency. For DX12 they should be fine beating a Titan X at its base.
If AMD released a $350 card that gave you 150% of the performance of a Titan XP, he still wouldn’t buy it, because it’s AMD.
Even as inept as AMD is as a company, they would not sell such a card for only $350. It’s your idealization of AMD, that they price their products to help you. They do no such thing.
The FX 9590 was priced at $850+ when released; it’s currently $190. The Fury X was priced at $650, same as the 980 Ti, even though it can’t beat it in most games.
No, I still wouldn’t buy the AMD card, because it would still be missing hardware PhysX, still couldn’t handle tessellation, and would still have frame time graphs that look like an earthquake seismograph. Among other things.
Let’s be realistic here. AMD is calling Vega enthusiast level; what do you think their price is going to be? Not $350, unless it is barely faster than two 4GB RX 480s.
Especially with HBM2, I’m betting it will be a lot closer to $1,000 than $350. Unless performance is total crap. It won’t be, because AMD will tweak and overclock it to the moon to get performance at least on par with Nvidia, and wattage be damned. AMD fanboys don’t care about no stinking wattage.
Fury X/Fury were priced to pay for all those years of HBM R&D, and those costs need to be amortized. The low price on the RX 480 and RX 470 is to get market share and the revenues that go with increased market share! Revenues pay the bills, and after the bills are paid, if any cash remains, then that is declared profit. AMD needs to increase its revenues to pay for R&D, pay the salaries of employees, make debt payments, and mostly to stay in business.
Profits are not as necessary as revenues; a business can go on for quite a while as long as there are not great losses each quarter. So AMD can keep going as long as it has revenue growth. Revenue growth can get the banks to lend even more short-term debt money to keep the business going, and the banks, if they see revenue growth along with new products/product innovation and market share growth/new market growth, will continue to lend, continue to refinance any short-term debt into long-term debt, and continue to renegotiate long-term debt further into the future.
Banks do not care about their business customers’ profits; banks only care about their business customers’ ability to make their debt payments! So AMD’s banks will look at AMD’s revenues, its revenue growth, its market share growth, any new market business that AMD gets, AMD’s total debt load relative to its market cap, and total revenue growth projections. That Zen news is also good for more good-faith lending from banks, as AMD will be getting some revenue growth from the server market that it lost a while ago. Zen does not have to beat Intel’s latest CPU cores outright, and AMD’s GPUs do not have to beat Nvidia’s GPUs outright; AMD only needs to be the price/performance winner in the mainstream GPU market and the CPU/APU market. AMD’s server market share can only go up, because it is practically nil; Zen will get AMD some more server market share, and Polaris will get AMD more mainstream GPU market share.
AMD is so lean from all those years of cutting back that it only needs to get back into the server business with Zen (some Zen server SKU revenue is better than none at all, and in the server market AMD is already close to none at all)! And likewise AMD needs to get more revenue/market share growth with Polaris in the mainstream GPU market to really turn things around. AMD needs to take all of that HBM2 it will be getting and put it towards its professional HPC/workstation/server GPU accelerator products and HPC/workstation APU-on-an-interposer products, because that’s where the real markups are, and only use HBM2 for its flagship consumer SKUs. AMD is so far down that it can only go up, as long as Zen is near Haswell levels of IPC performance and Polaris can get more mainstream GPU market share and the revenues/revenue growth that go with it. AMD only needs to worry about that price/performance metric; it does not even need to take any flagship performance crown to do that.
I admire your efforts, but you shouldn’t bother bringing facts and logic when talking to him. He doesn’t care about facts and logic, he has AMD bashing to do.
What you say is true. With their IP they can stay in business until 2020 without making a dime. That is when the 2.2 billion they refinanced is due to be paid. Are they going to get lucky and refinance that 2.2 billion yet again? So the answer is they need to start making some serious cash, or it could be curtains for them.
I wouldn’t even wish that on AMD, unlike AMD fanboys who fantasize about Nvidia’s demise.
You’re an idiot. Either you just don’t understand hyperbole, or you’re deliberately ignoring it and taking the statement literally because you know perfectly well I’m right and you can’t stand the idea of admitting it.
Never once did I say, or even hint, that I expected a $350 top-tier Titan XP-killing graphics card from AMD.
What I said was that it wouldn’t matter IF AMD released such a card at such a price; you still wouldn’t buy it, because your Nvidia fanboyism is infinite. IF, by some strange, magical, completely impossible confluence of events, AMD just one day released an architecture that absolutely ripped Nvidia a new one, in every performance tier, at every price point, at every power consumption level – and let’s even pretend that they managed to handle all the GameWorks crap that was supposed to wreck them – you STILL wouldn’t buy them, because you’re a blind die-hard fanboy.
It is, and was, an incredibly hyperbolic hypothetical, and thus your entire response is completely irrelevant. You get no points.
Wow. Who’s the fanatic here? It isn’t me. I know you wouldn’t buy Nvidia, so me making a BS statement along the same lines as yours wouldn’t yield any meaningful exchange anyway. I wouldn’t waste my time even typing a response to your fanboyism.
…and I can buy 10 Kias for the price of 1 Ferrari. What’s your point?
Well, what does it mean that something is 120% slower? If the Nvidia card does 60 fps, then the AMD card would do what – negative 12 fps? What is that?
25 fps vs 60 is 120%
No, it is not. 25 is 42% of 60. 120% of 60 is 72.
So again, something being 120% slower does not make much sense.
100% = double. 25×2 = (100%) 50 fps. 20% of 50 is 10 frames. So it is 120% slower at 25 fps. 50 + 10 fps = 60 fps.
If I halve 60 fps to 30, which is 100%, and 20% of 30 is 5, you still get 25.
I know it’s confusing, but 120% of 60 would be 132 fps.
72 would only be 20% more frames than 60. Not 120%.
Hope this clears it up.
“I know it’s confusing but 120% of 60 would be 132 fps.”
Wrong. It’s 72.
“72 would only be 20% more frames than 60. Not 120%.”
Correct.
“Hope this clears it up.”
No, it does not.
When someone says X is “something” compared to Y, and that something results in a percentage, it’s the Y value that is the reference value.
So:
X is 120% of Y: we take Y*1.2.
X is 120% faster than Y: we take Y+Y*1.2.
X is 120% slower than Y: we take Y-Y*1.2.
It’s the last one that makes zero sense when we are on a scale that cannot go below 0.
Clearly the person made some mistake, but what? Did he mean 20%? Did he just switch X and Y around? It’s simply not possible to understand what it means to be 120% slower than something else.
From a numeric standpoint you are correct. I did not mean the elementary crack towards you, Kenjo – sorry. It was directed at the anonymous who constantly hounds me. They said 70%-120% faster, of which 120% is 70% + 50%. We’ll use 100 to make it simple. 70% faster would be 170. 120% faster would make it 220. 100% faster would be 200. Even doing it as 100 plus 70% = 170, then adding 50% of 100, which is 50, gives the same 170 + 50 = 220.
He could have said it gets 170%-220% of the performance of the lesser card and still be correct.
What a surprise, you’re wrong again.
100% of 60 = 60
100% faster than 60 = 120
100% slower than 60 = 0
Also,
120% of 60 = 72
120% faster than 60 = 132
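For anyone still tripping over the distinction, the three phrasings in this sub-thread can be checked mechanically. A quick sketch in Python (the helper names are mine, purely for illustration):

```python
# Three different operations that keep getting conflated in this thread.
# "base" is the reference value (the Y in "X is N% faster than Y").

def percent_of(base, pct):
    # "X is pct% of Y": Y * pct/100
    return base * pct / 100

def percent_faster(base, pct):
    # "X is pct% faster than Y": Y + Y * pct/100
    return base * (100 + pct) / 100

def percent_slower(base, pct):
    # "X is pct% slower than Y": Y - Y * pct/100
    # (goes negative past 100%, which is why "120% slower" breaks down)
    return base * (100 - pct) / 100

print(percent_of(60, 120))      # 72.0
print(percent_faster(60, 100))  # 120.0
print(percent_faster(60, 120))  # 132.0
print(percent_slower(60, 100))  # 0.0
print(percent_slower(60, 120))  # -12.0
```

The last line is the "negative 12 fps" case raised earlier: on a frame-rate scale that bottoms out at zero, "120% slower" has no sensible meaning.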
Exactly – “faster than” is the qualifier, not just computing raw numbers. 120% faster doesn’t mean take all of it and subtract 20%. So take all 60 frames away, 100% = 0; 20% of 60 is 12; so -12 fps is logical. Go back to elementary school and learn math.
If he said it is 220% of the frame rate, it is also double plus 20%, purely from a numeric standpoint.
I’m done – if you can’t get it, you probably won’t anyway. It was obviously what they meant, and not just 20%-70% faster.
So answer me this: is 60 fps not 120% faster than 23-25 fps? I’m waiting. Nod your head yes instead of twisting things to feel that you are right.
The Titan is faster, so you go from the lower number and work up from there. No one said that the lower card got 60 fps; I just used the number for convenience. I used 60 fps as a baseline and assumed this would be taken to mean the Titan X was at 60 fps. So 23-25 fps is indeed 120% slower than 60 fps, or the Titan X is 120% faster than it, as well.
So maybe you miscomprehended it, but I don’t think so.
Yes, but the RX 480 is not for the same market; Vega is for competing with the Titan X. But I’d love to see 4 RX 480s using Vulkan/DX12-optimized games and both Vulkan’s and DX12’s multi-GPU adapter ability, once all the kinks are worked out of the Vulkan/DX12 multi-GPU adapter and the gaming market has more Vulkan/DX12-optimized titles. That multi-GPU adapter technology, made available for games to access via Vulkan/DX12, is going to be great for multi-GPU usage of cards like the RX 480, and even the GTX 1060 (no SLI support from Nvidia)!
Yes, the top flagship GPU SKU from AMD is getting a little long in the tooth compared to Nvidia’s latest, but at least that older-generation flagship from AMD is getting some async-compute improvements, while Nvidia’s long-in-the-tooth former flagship is not as powerful as the new Titan X (Pascal) and has no ability to make use of async compute, as even some older GPUs from AMD can.
Doom Vulkan benchmarks?
How is the mining on this thing?