While I realize that it’s the other way around if anything, part of me wants to believe that NVIDIA released this new graphics card, the TITAN Xp, solely to prevent people from calling last year’s Titan X “Titan XP”. Alternatively, they could be trolling everyone, but doing so with a legit product launch.
The NVIDIA TITAN Xp is, finally, a fully-unlocked GP102 for the consumer market, a die that was previously exclusive to the Tesla P40 and Quadro P6000 graphics cards. The extra 256 CUDA cores and slight bump in boost clocks equate to an expected 10.7% increase in boost shader capacity (12.15 TFLOPs vs 10.97 TFLOPs). Memory bandwidth for its 12GB of GDDR5X has also increased, from 480 GB/s to 547.7 GB/s, a 14.1% gain.
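Those percentages fall straight out of the core counts and clocks. A minimal sketch, assuming NVIDIA's published boost clocks (1582 MHz for the TITAN Xp, 1531 MHz for the 2016 Titan X), which aren't listed in this post:

```python
# Peak FP32 rate = cores x clock x 2 (one fused multiply-add per cycle).
def tflops(cuda_cores, boost_mhz):
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

titan_xp = tflops(3840, 1582)  # fully-unlocked GP102
titan_x = tflops(3584, 1531)   # 2016's Titan X

print(round(titan_xp, 2))                        # 12.15
print(round(titan_x, 2))                         # 10.97
print(round((titan_xp / titan_x - 1) * 100, 1))  # 10.7 (% shader uplift)
print(round((547.7 / 480.0 - 1) * 100, 1))       # 14.1 (% bandwidth uplift)
```

Of that ~10.7%, the 256 extra cores contribute about 7.1% and the clock bump the remaining ~3.3%.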
NVIDIA's blog post also mentions that macOS drivers are coming this month.
The NVIDIA TITAN Xp is available now from NVIDIA’s website for $1200 USD. 2016’s NVIDIA Titan X is also listed at $1200, but is out of stock for some weird reason… hmm. It’s almost like they released an all-around better product at the same price point.
Nice work, NVIDIA.
So when is Tom going to bring over a GP100 for you guys to benchmark?
We need a DX12 or Vulkan API version of Pong, to see what kind of frame rates we could get on that GPU.
Does this equate to a ~10% performance boost over a GTX 1080 Ti?
For the extra $500, you must be getting something else, right?
Like offering some driver features that the GTX doesn’t have?
I wonder if AMD can disrupt this, like they did the $1000 CPU market with the R7…
If AMD can ship a Vega “Pro” for $499 that matches or beats a $1200 Pascal, things would get very interesting.
TFLOPS is not a metric in and of itself, even if it has 10.97 TFLOPS (now 12.15) in front of that TFLOPS acronym. So which is it: HP FP, SP FP, DP FP, or even 8-bit FP TFLOPS?
IF the RX 580 retails for less than $200 for some SKUs, and the RX 580 is used for number crunching like in coin mining, then I can get 2 RX 580s (at around $400) at 6.17 SP TFLOPS each, with 2 RX 580s offering 12.34 SP TFLOPS of compute power. And for the price of one Titan Xp ($1200) I can afford 6 RX 580s, for a total of 37.02 SP FP TFLOPS, and that’s a lot of SP FP compute for $1200.
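The multi-card arithmetic above, as a quick sketch (prices are the ballpark figures from this comment, not current retail):

```python
# TFLOPS-per-dollar comparison using the figures quoted above.
rx580_tflops, rx580_price = 6.17, 200.0
titan_xp_tflops = 12.15

budget = 1200.0                             # price of one Titan Xp
rx_cards = int(budget // rx580_price)       # 6 cards for the same money
aggregate = rx_cards * rx580_tflops

print(rx_cards)                             # 6
print(round(aggregate, 2))                  # 37.02 aggregate SP TFLOPS
print(round(aggregate / titan_xp_tflops, 1))  # ~3x the SP compute per dollar
```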
Now, gaming is another matter, as that requires ROPs and other resources. But the Vulkan/DX12 APIs will offer API-managed, non-CF/SLI multi-GPU load balancing for gaming and other uses, and developers are already starting to look into that graphics-API-managed multi-GPU load balancing for gaming and GPGPU usage.
So those mainstream RX 580s and other rebrands/refreshes will have their uses, even for gaming, where the entire games/engine industry will be making more use of those DX12/Vulkan non-CF/SLI methods of GPU load balancing. As soon as one gaming engine maker makes good use of that API-managed multi-GPU load balancing feature set in the new graphics APIs, things will be very different going forward.
This GPU is focused on 32-bit floating-point operations, so single precision. Its FP16 and FP64 performance is very low: 1:32 for FP64 and 1:64 for FP16 (relative to FP32).
So if an ## TFLOPS statement is missing that “SP FP” in front of the number, then it defaults to 32-bit/SP TFLOPS. I’d still rather know the full metrics on the SKU; it’s not hard to cut and paste some more complete specs.
“1:32 for FP64 and 1:64 for FP16 (relative to FP32).”
Does this ratio hold true for all makers’ GPU SKUs?
Nope. It's a die-by-die comparison. NVIDIA's GP100, for instance, is the ideal 1:2:4 FP64:FP32:FP16. (FP64 is 1/2 FP32, and FP16 is 2x FP32.) It takes die room to connect registers with circuits that perform whole different instructions.
However, even with a smaller memory controller, it's 150 mm² larger than the GP102 of the TITAN Xp (600 mm² vs 450 mm²). Since games are focused on FP32, this means you are wasting 33% of your die area, which (because the defect rate per wafer is roughly constant) translates into fewer good chips, for a 0% gain in FP32 performance. So not only can you cut fewer chips from a wafer, but you're going to be throwing out (or down-binning) a higher proportion of them.
This translates to more expensive and likely slower parts.
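That yield argument can be sketched with a toy Poisson defect model; the defect density used here is purely an illustrative assumption, not foundry data:

```python
import math

# Toy Poisson yield model: per-die yield ~ exp(-D0 * A),
# with D0 in defects per cm^2 (an assumed, illustrative value).
D0 = 0.2

def good_dies(wafer_area_cm2, die_area_mm2, d0=D0):
    die_cm2 = die_area_mm2 / 100.0
    candidates = wafer_area_cm2 / die_cm2        # ignores edge loss
    return candidates * math.exp(-d0 * die_cm2)  # survivors after defects

wafer = math.pi * 15.0 ** 2    # 300 mm wafer, area in cm^2

gp102 = good_dies(wafer, 450)  # GP102 area as quoted above
gp100 = good_dies(wafer, 600)  # GP100 area as quoted above
print(round(gp102 / gp100, 1)) # ~1.8x more good GP102 dies per wafer
```

The bigger die loses twice: fewer candidate dies fit on the wafer, and each one is more likely to catch a defect.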
On the other hand, if your workload is FP64 (or FP16) then consumer- or professional graphics-focused chips only have a fraction of the registers hooked up to the circuits you need for your logic. In those cases, you'd want GP100 over GP102, even though they both have roughly the same FP32 performance.
Some chips are even weirder: GK110/GK210 had an FP64:FP32 ratio of 1:3…
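To see what those ratios mean in absolute terms, here's a small sketch; the GP100 FP32 figure (~10.6 TFLOPS, the Tesla P100 boost rate) is an assumption from public spec sheets, not from this thread:

```python
# Peak throughput at other precisions = FP32 rate x the die's ratio.
def rates(fp32_tflops, fp64_ratio, fp16_ratio):
    return {"FP32": fp32_tflops,
            "FP64": fp32_tflops * fp64_ratio,
            "FP16": fp32_tflops * fp16_ratio}

# GP102 (TITAN Xp): FP64 at 1/32 and FP16 at 1/64 of FP32
titan_xp = rates(12.15, 1 / 32, 1 / 64)
# GP100 (Tesla P100, assumed ~10.6 FP32 TFLOPS): the ideal 1:2:4
gp100 = rates(10.6, 1 / 2, 2)

print(round(titan_xp["FP64"], 2))  # 0.38 TFLOPS: hopeless for FP64 work
print(round(gp100["FP64"], 2))     # 5.3 TFLOPS
```

Roughly equal FP32, but an order-of-magnitude gap at FP64, which is the whole point of the die-by-die comparison above.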
A pure business play. They are tweaking what they have to give a bit more space between the high end and the best they have to offer. There will always be customers who say “I want the best and I’m willing to spend the money to get it”. This product is for that segment.
Well, it’s a bit overpriced for the amount of SP FP TFLOPS that it offers! Now, if that DP FP TFLOPS is up there relative to any other offerings, maybe that’s a plus. But the real pros who can write things off as a business expense will go for the Quadros and the Radeon Pro WXs, because that error correction needs to be there, lest the pedestrian walkway/bridge not do so well, or that stock-option trade price be in error and cost millions of dollars in insurance claims or lost profits all around.
So for a pure business-indemnity reason, it’s best to go with the Quadros or the Radeon Pro WXs and pay to get that. It is all a valid business expense/tax write-off, so that cost factor doesn’t carry so much weight if error-free operation is necessary from the production, professional-grade SKUs.
Maybe Nvidia could offer this with its professional drivers, like AMD does with the Radeon Pro “Duo” SKU, for development scenarios where developers can target software that uses the certified drivers on the lower-cost hardware before doing the final testing on a production-certified true pro GPU SKU.
So if you are a software development house, you could purchase a few Radeon Pro WX SKUs for final testing and certification of the RTM software product, while the majority of the development work is done on a lower-cost GPU SKU (Radeon Pro Duo) that comes with access to the pro drivers. That’s how AMD marketed its “Radeon Pro Duo” SKU, which now retails for around $800.00.
It’s very unlikely that Nvidia will do that. So what pure business market will want the Titan Xp? This is more of a pure Nvidia business play to make some more dollars at the expense of its end users, for sure, but trust-fund kids have dollars to be separated from, as always.
milk baby milk.
the 1080 Ti was supposed to be 10% faster 😀
So glad AMD doesn’t give much info about Vega.
This is not for GPU peasants… just Suckers.
Nvidia’s marketing department really likes that name, but they should do something to differentiate one product from the other.
Like, maybe add more Xs. xXX_Titan_xXx
They should call it Titan Xm (Xtra milking) :p
Anyway, only a very few were probably expecting this, but it does make sense. With the 1080 Ti being $500 less and faster, with only 1GB less memory (which will bother almost no one), the Titan X at $1200 was a joke. Nvidia had to update the Titan X card just to make a little sense. A $700 card that is faster than the $1200 supposedly top card? They needed to change that.
Might be an early reaction to AMD VEGA coming out.
A non-reference high-end 1080 Ti can match that performance, if not beat it, without throttling. AMP Extreme, anyone?