Specifications and Design
How does ASUS’s Flagship GTX 1080 Ti match up?
With all of the activity in both the GPU and CPU markets this year, it's easy to forget some of the launches from the first half of the year, including NVIDIA's GTX 1080 Ti. Holding the rank of fastest gaming GPU for the majority of the year, NVIDIA's GP102-based offering has faced little real competition, making it the de facto choice for high-end gamers.
Even though we've been giving a lot of attention to NVIDIA's new flagship TITAN V graphics card, its $3,000 price tag puts it out of reach of almost every gamer who doesn't have a day job involving deep learning.
Today, we're taking a look back at the (slightly) more reasonable GP102 and one of the most premium offerings to feature it, the ASUS ROG Strix GTX 1080 Ti.
While the actual specifications of the GP102 GPU onboard the ASUS Strix GTX 1080 Ti haven't changed at all, let's take a moment to refresh ourselves on where it sits relative to the rest of the market.
| | RX Vega 64 Liquid | RX Vega 56 | GTX 1080 Ti | GTX 1080 | GTX 1070 Ti | GTX 1070 |
|---|---|---|---|---|---|---|
| Base Clock | 1406 MHz | 1156 MHz | 1480 MHz | 1607 MHz | 1607 MHz | 1506 MHz |
| Boost Clock | 1677 MHz | 1471 MHz | 1582 MHz | 1733 MHz | 1683 MHz | 1683 MHz |
| Memory Clock | 1890 MHz | 1600 MHz | 11000 MHz | 10000 MHz | 8000 MHz | 8000 MHz |
| Memory Interface | 2048-bit HBM2 | 2048-bit HBM2 | 352-bit G5X | 256-bit G5X | 256-bit | 256-bit |
| Memory Bandwidth | 484 GB/s | 410 GB/s | 484 GB/s | 320 GB/s | 256 GB/s | 256 GB/s |
| TDP | 345 watts | 210 watts | 250 watts | 180 watts | 180 watts | 150 watts |
| Peak Compute | 13.7 TFLOPS | 10.5 TFLOPS | 11.3 TFLOPS | 8.2 TFLOPS | 7.8 TFLOPS | 5.7 TFLOPS |
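For reference, the Peak Compute row follows directly from shader count and boost clock: each FP32 core can retire two operations per clock via fused multiply-add. A quick sketch (the shader counts below are the public specs for each SKU, not figures from the table itself):

```python
# Peak FP32 throughput = 2 ops/core/clock (FMA) x shader count x boost clock.
def peak_tflops(shaders, boost_mhz):
    return 2 * shaders * boost_mhz * 1e6 / 1e12

print(round(peak_tflops(3584, 1582), 1))  # GTX 1080 Ti      -> 11.3
print(round(peak_tflops(4096, 1677), 1))  # RX Vega 64 Liquid -> 13.7
```

The small gap between the cards' theoretical compute and their relative gaming performance is a good reminder that TFLOPS alone doesn't decide framerates.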
If you'd like some additional details on the NVIDIA GTX 1080 Ti or its GP102 GPU, take a look at our review of the reference Founders Edition.
The GTX 10 series of products from NVIDIA marked a consolidation in ASUS's GPU offerings. Instead of offering both Strix and Matrix products, the Strix has supplanted everything to become the most premium option from ASUS for any given GPU, and the Strix GTX 1080 Ti doesn't disappoint.
While it might not be the largest graphics card we've ever seen, the ASUS Strix GTX 1080 Ti is larger in every dimension than both the NVIDIA Founders Edition card and the EVGA ICX option we took a look at earlier this year. Compared to the Founders Edition, the Strix GTX 1080 Ti is 1.23 in longer, 0.9 in taller, and takes up an extra PCIe slot in width.
In particular, the Strix is difficult to fit in smaller form-factor cases that might have the available length but are more constrained on GPU height. However, if you have a mid or full tower chassis and don't care about GPU size, the massive cooler on the Strix has a lot to offer.
The cooler design of the Strix GTX 1080 Ti is based on a massive heatsink array with six heat pipes, mated to a triple fan setup. By taking up 2.5 slots, ASUS is able to implement a taller heatsink with more surface area for cooling.
This is most evident in what ASUS calls "0dB Technology." Essentially, the fans on the graphics card won't spin up until the GPU reaches 55 degrees Celsius, and we observed this working as expected during our time with the Strix 1080 Ti. This is great for running less demanding or older titles with V-Sync on, which won't tax the GPU very much, allowing for near-silent operation.
Another interesting cooling feature found on the Strix GTX 1080 Ti is the addition of two 4-pin fan headers on the end of the card. These fan headers, in conjunction with ASUS's GPU Tweak II software, allow you to set custom fan curves for connected case fans based on GPU and/or CPU temperature. This would be great for front intake fans, letting you ramp them up when the GPU is hot to push more air toward the card.
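The fan behavior described above amounts to a curve with a hard "off" region below a temperature threshold. A minimal sketch of that logic (the threshold matches the 55 °C figure above; the ramp endpoints and duty values are illustrative assumptions, not ASUS's actual curve):

```python
# Hypothetical "0dB"-style fan curve: fans stay fully stopped below a
# threshold, then ramp linearly from a minimum duty cycle to 100%.
def fan_duty(temp_c, zero_db_threshold=55, max_temp=84, min_duty=30):
    """Return fan duty cycle (percent) for a given GPU temperature."""
    if temp_c < zero_db_threshold:
        return 0  # fans off: silent operation under light loads
    # Linear ramp from min_duty at the threshold to 100% at max_temp
    span = max_temp - zero_db_threshold
    duty = min_duty + (100 - min_duty) * (temp_c - zero_db_threshold) / span
    return min(100, round(duty))

print(fan_duty(50))  # 0   (below 55 C, fans stopped)
print(fan_duty(55))  # 30  (fans kick on at the threshold)
print(fan_duty(84))  # 100 (full speed at the ramp's top)
```

Real implementations typically add hysteresis so the fans don't cycle on and off around the threshold; that detail is omitted here for brevity.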
Of course, as with any flagship computer hardware product these days, RGB is available in full force on the Strix GTX 1080 Ti. Not only are there built-in RGB LEDs on the fan shroud itself, but there is a header for RGB LED strips on the card. This could be good if you want to sync additional LEDs with your graphics card, or if your motherboard doesn't support RGB expansion.
While the Founders Edition GTX 1080 Ti sports one 8-pin and one 6-pin PCIe power connector, the Strix 1080 Ti bumps that to two 8-pin connectors, allowing for an extra 75W of power draw while overclocking.
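That 75W figure falls out of the PCIe spec limits: the slot provides up to 75W, a 6-pin connector 75W, and an 8-pin connector 150W. The arithmetic:

```python
# PCIe power budget per the spec: slot = 75 W, 6-pin = 75 W, 8-pin = 150 W.
founders = 75 + 150 + 75   # slot + one 8-pin + one 6-pin = 300 W
strix    = 75 + 150 + 150  # slot + two 8-pin connectors  = 375 W
print(strix - founders)    # extra headroom in watts
```

Note this is the connector budget, not the card's actual draw; the stock 250W TDP sits comfortably under either total, so the extra connector only matters when overclocking.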
A much-welcomed revision from the NVIDIA Founders Edition, the Strix 1080 Ti adds a dual-link DVI connector back into the mix alongside four DisplayPort connectors and a full-size HDMI port.
Can you test the 1080 Ti/Vega 64 with pigtail PSU connectors vs. separate 8 pin cables? See if there’s any difference in overclocking headroom?
Why would they do that? They are running a business and barely anyone would be interested in that.
The benefit of going with separate 8-pin cables is cleaner power to the card when it demands more power than the PSU can deliver over a single cable's wires.
There won't be any difference if the card is already hitting its top stable OC setting. If it becomes unstable due to power fluctuation, separate cables can help reduce those issues.
Redoing all the benchmarks just to test this isn't worthwhile, as most people don't bother hand-picking cables unless they're chasing optimal OC performance. And the difference could be 10 MHz, which won't show in a graph.
What’s new in the PCB design? http://www.tomshardware.com/news/asus-rog-strix-gtx-1080-ti-pcb,36181.html
We've actually had this card for a while, so it's the first revision, but we've reached out to ASUS for clarification/details!
With 88 ROPs, 24 more than the Vega 64/56 and the other Nvidia SKUs in that table, the GTX 1080 Ti will always win the FPS benchmarks. That, plus the overall GPU market being influenced by compute demand, means Nvidia and its AIB partners can keep prices well above MSRP and earn more money for their investors.
So with the GPU compute market keeping AMD's Vega prices inflated, Nvidia can bring in plenty more revenue for its shareholders by moving that MSRP higher, or by setting its MSRP as a range of values depending on demand. GPU makers should list their MSRPs higher to begin with; that gives them more latitude to price for higher revenue, because if pricing later drops below MSRP it looks like a deal to consumers. Neither Nvidia nor AMD appears to have trouble keeping inventories as small as possible, with AMD never having enough stock of Vega to meet demand.
Nvidia's GP102 has an excess of available ROPs for gaming, and Nvidia really should hold Volta in reserve for the gaming market until it's absolutely needed. Nvidia can extend Pascal's lifespan because GP102 still has extra ROPs to enable for FPS increases until the Vega refresh SKUs arrive at 12nm. If AMD cannot match Nvidia in ROP count, there is no way AMD can push its FPS metrics higher even with higher clock rates. AMD needs a reworked Vega base die design with 88+ ROPs.
NVidia and AMD sell their GPUs to card makers (for the most part) like Asus at a PRE-AGREED price in bulk.
So NVidia and AMD don’t benefit from sudden price increases for cards due to crypto-currencies or whatever. That’s mostly the RESELLERS like Amazon or Newegg.
I don’t know what your point is about “keeping their inventories as small as possible” however both companies would prefer to sell as much as they can.
However, fabrication plants take orders several MONTHS in advance and in addition they can only ramp up so much anyway due to other chip commitments.
AMD doesn’t need a “reworked Vega base die design with 88+ ROPs”… do you suddenly know more than the AMD engineers?
The raw numbers don’t tell you everything, but graphics cards generally have everything BALANCED as it would be stupid to have too little of something that made a big bottleneck thus wasting money spent in another area.
You can’t just look at the CUDA CORES and think that the ROP count should be the same as well for a different architecture.
You’ll always find scenarios where more ROPs might help, or some other part of the card but again it’s about BALANCING everything within price and power constraints. Plus, the design needs an optimized DX12 or Vulkan title to really show off Vega.
Nvidia and AMD can both increase their wholesale/AIB partner pricing during the next round of purchasing contracts to cover increased expenses, and AIB partners will have to pay more. Both AMD and Nvidia need to increase these prices to cover their parts (DRAM mostly), R&D, and driver development costs, and the AIB partners will not want to lose those Bitcoin markups, so they will pay any reasonable price increase or they will not get the dies. I'm pretty sure that AMD has plenty of AIB partners competing for the available Vega 10 base dies used to create the Vega 64/56 SKUs, and it's a seller's game currently with demand so high, so the ball is not in the AIB partners' hands.
ROPs are the very thing that put out the pixels on GPUs, no matter the quality of the frames, and that's a fact. So more ROPs equate to higher pixel fill rates and more frames per second for whatever GPU has the most ROPs. All the rest of that other GPU IP can be tweaked to feed the proper data into the Raster Operations Pipelines, so Nvidia can fling out more frames at lesser quality, and who will notice as long as there are no jagged edges for the eye (brain, actually) to catch at 30-60+ FPS with the proper frame variance that makes for smooth frame delivery?
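The arithmetic behind the fill-rate part of that claim can be checked directly: peak pixel fill rate is simply ROP count times clock speed, one pixel per ROP per clock. A quick sketch (ROP counts and boost clocks are the public specs for these SKUs):

```python
# Peak pixel fill rate = ROPs x boost clock (one pixel per ROP per clock).
def fill_rate_gpix(rops, boost_mhz):
    return rops * boost_mhz / 1000  # Gpixels/s

print(round(fill_rate_gpix(88, 1582), 1))  # GTX 1080 Ti -> 139.2
print(round(fill_rate_gpix(64, 1677), 1))  # RX Vega 64  -> 107.3
```

Of course, this is a theoretical peak; as the reply below argues, real-world framerates depend on the whole pipeline being balanced, not on ROP count alone.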
Nvidia can and does get the better FPS metrics because that's the metric that gets tested. And if any reviewer digs too deep into image quality that the eye may not notice anyway at such rapid frame rates, then that reviewer will have to purchase review samples themselves and not get any freebies from the maker, with those review strings attached.
ROPs are the Nads of the gaming-focused GPU, and the GPU with the most Nads wins that all-valued Bubba Gamer FPS metric that takes the benchmark race and sells the GPUs for gaming workloads.
Price goes up as a function of the demand curve, so charge by the ROP for gaming SKUs! ROPs ROPs ROPs!
Actually, sometimes integrators like Asus, EVGA, MSI, Zotac, etc. do charge more based on supply and demand from things like cryptocurrency mining.
Thanks for the article, Ken. There's one thing I wanted to mention, though: the GTX 1080 Ti Strix is made in three different part numbers that come with different clocks: 8G, A8G, and O8G. I mention this because consumers reading the review really should know which version you tested for your results. Going off memory, I believe the O8G is the fastest, with the "O" standing for overclocked.
Also, the regular Strix cards have their max power limit capped at 112%, while the OC version has a higher power limit (120%) available for overclocking!
That is what helped me to make my decision to get the Strix 1080 OC over the regular Strix 1080!
Vega 64 needs an air-cooled disclaimer to make it clear. The water-cooled version is much faster than a 1080.
How does the card perform in DX12 games like The Division?
And with Vulkan, like Wolfenstein?
I had the Strix and now have the original FTW. I like the different sensors on the EVGA cards better, plus the current FTW Elite comes with a 12 GHz memory factory overclock.