TU106 joins the party
How does the smallest Turing yet fare?
In general, the launch of NVIDIA's RTX 20-series GPUs, in the form of the RTX 2080 and RTX 2080 Ti, has been a bit of a mixed bag.
While these new products did give us the fastest gaming GPU available, the RTX 2080 Ti, they are also some of the most expensive video cards ever to launch. With a value proposition partially tied to the adoption of new hardware features in games, the reception of these new RTX cards has been rocky.
To say this puts a bit of pressure on the RTX 2070 launch would be an apt assessment. The community wants to see a reason to get excited for new graphics cards, without having to wait for applications to take advantage of the new hardware features like Tensor and RT cores. Conversely, NVIDIA would surely love to see an RTX launch with a bit more praise from the press and community than their previous release has garnered.
The wait is over: today we are taking a look at the RTX 2070, the last of the RTX-series graphics cards announced by NVIDIA back in August.
  | RTX 2080 Ti | GTX 1080 Ti | RTX 2080 | RTX 2070 | GTX 1080 | GTX 1070 | RX Vega 64 (Air) |
---|---|---|---|---|---|---|---|
GPU | TU102 | GP102 | TU104 | TU106 | GP104 | GP104 | Vega 64 |
GPU Cores | 4352 | 3584 | 2944 | 2304 | 2560 | 1920 | 4096 |
Base Clock | 1350 MHz | 1480 MHz | 1515 MHz | 1410 MHz | 1607 MHz | 1506 MHz | 1247 MHz |
Boost Clock | 1545 MHz / 1635 MHz (FE) | 1582 MHz | 1710 MHz / 1800 MHz (FE) | 1620 MHz / 1710 MHz (FE) | 1733 MHz | 1683 MHz | 1546 MHz |
Texture Units | 272 | 224 | 184 | 144 | 160 | 120 | 256 |
ROP Units | 88 | 88 | 64 | 64 | 64 | 64 | 64 |
Tensor Cores | 544 | — | 368 | 288 | — | — | — |
Ray Tracing Speed | 10 GRays/s | — | 8 GRays/s | 6 GRays/s | — | — | — |
Memory | 11GB | 11GB | 8GB | 8GB | 8GB | 8GB | 8GB |
Memory Clock | 14000 MHz | 11000 MHz | 14000 MHz | 14000 MHz | 10000 MHz | 8000 MHz | 1890 MHz |
Memory Interface | 352-bit G6 | 352-bit G5X | 256-bit G6 | 256-bit G6 | 256-bit G5X | 256-bit G5 | 2048-bit HBM2 |
Memory Bandwidth | 616 GB/s | 484 GB/s | 448 GB/s | 448 GB/s | 320 GB/s | 256 GB/s | 484 GB/s |
TDP | 250 W / 260 W (FE) | 250 W | 215 W / 225 W (FE) | 175 W / 185 W (FE) | 180 W | 150 W | 292 W |
Peak Compute (FP32) | 13.4 TFLOPS / 14.2 TFLOPS (FE) | 10.6 TFLOPS | 10 TFLOPS / 10.6 TFLOPS (FE) | 7.5 TFLOPS / 7.9 TFLOPS (FE) | 8.2 TFLOPS | 6.5 TFLOPS | 13.7 TFLOPS |
Transistor Count | 18.6 B | 12.0 B | 13.6 B | 10.8 B | 7.2 B | 7.2 B | 12.5 B |
Process Tech | 12nm | 16nm | 12nm | 12nm | 16nm | 16nm | 14nm |
MSRP (current) | $1200 (FE) / $1000 | $699 | $800 (FE) / $700 | $599 (FE) / $499 | $549 | $379 | $499 |
So finally, here we have the full GeForce RTX lineup (as currently announced), compared to their previous generation Pascal counterparts, as well as AMD's highest end option.
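The Peak Compute row in the table above follows directly from core count and clock: FP32 throughput is conventionally quoted as two FLOPs (one fused multiply-add) per CUDA core per cycle. A quick sketch of that arithmetic (the helper name is ours, not an NVIDIA tool):

```python
def peak_fp32_tflops(cuda_cores: int, boost_clock_mhz: float) -> float:
    # 2 FLOPs per core per cycle (one fused multiply-add), scaled to TFLOPS
    return 2 * cuda_cores * boost_clock_mhz * 1e6 / 1e12

# RTX 2070 reference (2304 cores at 1620 MHz boost) and Founders Edition (1710 MHz)
print(round(peak_fp32_tflops(2304, 1620), 1))  # → 7.5
print(round(peak_fp32_tflops(2304, 1710), 1))  # → 7.9
```

Running the same formula against the other cards in the table reproduces their Peak Compute entries as well.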
Taking a look at the TU106 GPU specifications as found in the RTX 2070, there are a few aspects to note. First is the continued use of GDDR6 memory. While the 10-series GPUs reserved the faster GDDR5X memory for the higher-end cards, Turing retains the GDDR6 controller even in lower-end parts.
Despite the similar CUDA core counts (albeit of a different design), the memory change from GDDR5X to GDDR6 alone gives the RTX 2070 a 40% improvement in memory bandwidth over the GTX 1080.
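That 40% figure falls straight out of the memory specs: bandwidth is the effective data rate times the bus width in bytes. A minimal sketch (the helper name is ours):

```python
def mem_bandwidth_gbs(effective_clock_mhz: float, bus_width_bits: int) -> float:
    # bandwidth (GB/s) = effective transfer rate * bus width in bytes
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

rtx2070 = mem_bandwidth_gbs(14000, 256)  # GDDR6  → 448.0 GB/s
gtx1080 = mem_bandwidth_gbs(10000, 256)  # GDDR5X → 320.0 GB/s
print(round(rtx2070 / gtx1080 - 1, 2))   # → 0.4, i.e. a 40% uplift
```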
Additionally, the TU106 GPU found in the RTX 2070 still retains the RT Cores for real-time ray tracing, as well as the Tensor cores for deep learning acceleration. While TU106 features fewer of both of these bespoke cores than the bigger Turing GPUs, this means the RTX 2070 will be able to take advantage of the same RTX software features as the 2080 and 2080 Ti, but with potentially less performance.
For this review, NVIDIA did not sample us the Founders Edition version. Instead, we received cards from NVIDIA partners such as EVGA, ASUS, and MSI. So far, the partners seem incredibly eager to showcase their RTX 2070 designs, based on the quantity of cards we are receiving, including the focus of today's review.
The EVGA GeForce RTX 2070 Black Edition
In general, pricing is something we save for the end of the review, presenting the product itself before diving into the value proposition it provides. However, given the vast array of RTX 2070 options out there, I think it's important to discuss why we went with the card we did for the "lead" position of this review.
While we've seen the NVIDIA strategy of pricing the Founders Edition card at a premium then promising less expensive card from partners for a few generations now (GTX 10-series, RTX 2080, RTX 2080 Ti), it seems like this launch will fulfill those price point promises.
Take, for example, the EVGA RTX 2070 Black Edition: at $499, it's a full $100 cheaper than the NVIDIA Founders Edition RTX 2070. Despite that price difference, the RTX 2070 Black Edition appears to make no compromises.
Featuring dual axial fans, the design of the RTX 2070 Black Edition is very similar to EVGA's higher-end RTX 2080 and 2080 Ti designs. It's clear this isn't a design purely meant to hit NVIDIA's "starting at" price tag. Given the relative issues that the RTX-series has had with its value proposition, we felt it important to take a look at the most inexpensive class of card to see if the story has changed at all.
It's also worth noting that this EVGA card is identical to the base RTX 2070 specifications in the table above, with a rated GPU Boost clock of 1620 MHz compared to the "factory overclocked" Founders Edition. This will help give us an idea of baseline RTX 2070 performance compared to looking at a highly overclocked partner card.
Looking closer at the design of the RTX 2070 Black Edition, we can see that the cooler is bigger than the PCB itself, with the whole card coming in at 10.6 inches long. This is a much more manageable size than we've become used to lately with the higher-end RTX-based products.
Power is supplied to the RTX 2070 Black Edition through the PCI Express slot, as well as an additional 8-pin power connector, for a total available power draw of around 225 watts.
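For reference, the PCI Express specifications allow roughly 75 W from the x16 slot and 150 W from an 8-pin PEG connector, so the card's available budget comfortably exceeds its 175 W TDP. A trivial sketch of that sum:

```python
# Nominal PCIe power delivery limits per the relevant specs
SLOT_W = 75        # x16 slot
EIGHT_PIN_W = 150  # 8-pin PEG connector

available = SLOT_W + EIGHT_PIN_W
headroom = available - 175  # vs. the RTX 2070's rated TDP
print(available, headroom)  # → 225 50
```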
The RTX 2070 Black Edition features the same video connector layout we've seen on the NVIDIA RTX 2070 Founders Edition, with one dual-link DVI port, two DisplayPort 1.4 connections, one HDMI 2.0 port, and the new VirtualLink connection for connecting future VR headsets through a single wire.
Review Terms and Disclosure | All Information as of the Date of Publication |
---|---|
How product was obtained: | The product is on loan from NVIDIA for the purpose of this review. |
What happens to the product after review: | The product remains the property of NVIDIA but is on extended loan for future testing and product comparisons. |
Company involvement: | NVIDIA had no control over the content of the review and was not consulted prior to publication. |
PC Perspective Compensation: | Neither PC Perspective nor any of its staff were paid or compensated in any way by NVIDIA for this review. |
Advertising Disclosure: | NVIDIA has purchased advertising at PC Perspective during the past twelve months. |
Affiliate links: | This article contains affiliate links to online retailers. PC Perspective may receive compensation for purchases through those links. |
Consulting Disclosure: | NVIDIA is not a current client of Shrout Research for products or services related to this review. |
The RTX 2070 FE has a dual link DVI connector instead of a DP connector too, so it's exactly the same display outputs as the FE. Interesting PCB though, maybe they'll release an mITX version of this card in the future.
Whoops, sorry about that. Article corrected!
This is the go to RTX card once all the Pascal options from the GTX 1080 and above are no longer available.
Let's see if Nvidia waits on the RTX 1060 a little longer if there are any remaining GTX 1070 Ti and below Pascal SKUs still to be sold off. Nvidia mostly only has to compete with Nvidia, and then it's all RTX at higher prices after all the stocks of Pascal GPUs dry up.
It’s all good for Nvidia with AMD sitting out this round and not much except some rumored Polaris refresh-2 at 12nm. AMD is all in on 7nm but that’s next year. Nvidia will make bank this year on Pascal and Turing while AMD makes bank on Epyc, Ryzen, and Threadripper. AMD only has Vega and Pascal(For the sub $300 dollar market) for diecrete GPU sales opportunities.
Interestingly enough, Vega in the form of integrated graphics will still have some interesting market share numbers for both mobile and desktop APUs, and those figures will continue to grow regardless of the discrete GPU market that's decidedly an Nvidia advantage. AMD has the advantage in the console market with the Xbox One, Sony, and that other semi-custom Chinese console/PC that uses a Zen/Vega semi-custom APU SKU.
I can see AMD getting Tensor Core IP available long before AMD can compete with any Nvidia ray tracing IP. And this will be at Microsoft's and Sony's behest, because something similar to that Nvidia DLSS IP is just what the console makers would really want: some new IP from AMD's semi-custom folks in the form of AI/Tensor Core based upscaling, without needing more shader cores to do the upscaling workloads. AMD's semi-custom division has probably been working on this since long before Nvidia's Turing microarchitecture was fully known to the public, as Microsoft has been working with Nvidia for some time getting the DXR API engineering completed. Both MS and Sony are big users of upscaling, as is the console market in general.
Edit: AMD only has Vega and Pascal(For the sub $300 dollar market) for diecrete GPU sales opportunities
To: AMD only has Vega and Polaris(For the sub $300 dollar market) for diecrete GPU sales opportunities
Getting the Nvidia and AMD P names mixed up!
There won't be an RTX 1060. It will be known as the GTX 2060.
The Strange Brigade 1440p benchmark pic is the same as Sniper Elite 4. Please fix. Need that benchmark to compare.
Sorry! The graph should be fixed now. Thank you for bringing this to my attention.
Thanks!
WQHD Performance Index for PC Per’s GeForce RTX 2070 Launch Review
132.6% … GeForce RTX 2080 FE
105.6% … GeForce RTX 2070 Reference
100% ….. GeForce GTX 1080 FE
88.0% …. Radeon RX Vega 64 Reference
80.5% …. GeForce GTX 1070 FE
An index from 15 other launch reviews, with an overall performance index of the GeForce RTX 2070 launch, here:
https://www.3dcenter.org/news/geforce-rtx-2070-launchreviews-die-testresultate-zur-wqhd-performance-im-ueberblick
Great review, thank you.
If the price for this card stays where it is, I could probably see myself picking up one of these EVGA 2070 Black cards for my upgrade. At this price point it makes sense to pick one over a 1080 card. Even though I suspect the RT functions will probably be almost useless on the 2070, it would still be good to have the option there.
With all of that said, I suspect the prices will be going up on these cards fairly quickly as the supply chain for them dries up.
Everyone saying the RTX features will be useless isn't thinking about the resolution they will probably work at. I still use a 120 Hz 1080p monitor for gaming, so for ray tracing, I'm sure it will do a lot better at 1080p vs 1440p or 4K.
$500, if you can even get one at that price, is far from the sweet spot between performance and price. OK, maybe if you consider only the RTX family, then yes, the RTX 2070 offers the best value of these cards.
I see all the data from this review and also from other places, and I just can't come to the conclusion to give this card a gold award. At $400, yeah, it's good. At $500 it's way overpriced.
It's just very sad to see the price keep creeping up every generation (especially this gen).
The RTX 2070 is faster than a GTX 1080 and you somehow think it should be sold for $50 under the lowest price a GTX 1080 goes for on Newegg. Get real!!!
Clowns like you would justify the RTX 2080 being $100k because it's 10,000 times faster than a 3dfx Voodoo Banshee.
It's good that the world is not only made of clowns like you.
There were enough of them, though, to get the high end from $450 to $1250 in just the time it took to get from the GTX 580 to the 2080 Ti.
Hmmm… what's the MSRP of a Vega 64? Then why should the RTX 2070 be cheaper?
Because of the generational leap. I mean, in regards to price-to-performance it's almost the same as the Polaris refresh. And I didn't notice any awards given to the RX 580.
Is the performance/price ratio the only parameter they look at to assign an award?
The RX 580 didn't offer any new feature or efficiency improvement, it was just a rebrand as you said. Why even consider a rebranded product for an award? God forbid 🙂
You mean the features that aren't available to consumers and couldn't even be tested out? You can't rate a product based on a promise (and you should not buy one either). Realistically, ray tracing and DLSS will be meaningfully available to consumers by the next gen launch.
The only feature you can test out is asynchronous compute, but this feature translates directly into performance numbers. So what else? Efficiency. Yeah, I'm all about that, but it doesn't cut it when the price-to-performance is not there.
That's what I said: the RX 580 didn't deserve an award, and the RTX 2070 shouldn't either.
Wait wait wait, hang on. TU106? You mean to say that what has historically been the x60-tier chip is now in the x70 GPU, with the price bump to match?
This is a £200 tier product, rightfully the RTX 2060, masquerading as high end.
As far as performance goes, I bought two of these already two years ago. Only mine support SLI so I can actually use them both.
Wait wait wait, hang on. The TU106 die size is 445mm², just a bit smaller than that of Vega 64, which has the same MSRP (the air cooled one), while the GP106 in the GTX 1060 is a much smaller chip at 200mm².
Small hint: production cost isn't related in any way to the chip name…
Wow, you're totally right. If only Nvidia weren't fully in control of their own chip design.
I hope whoever has a gun to Jen-Hsun's head and is forcing his company to make such idiotic design choices feels bad about themselves…
NVIDIA should definitely hire you as CTO and CEO, given your incredibly deep knowledge of chip design and your forward-looking vision.
They probably should, tbh, given the blunders they made this year and how everything I called ended up happening. This launch wouldn't have been such a clusterfuck if they hadn't responded months late, and in an overly heavy-handed way, to the mining boom. AMD, while unfortunate to release in the middle of it, were wise not to ramp up production in response to it. Now Nvidia has to get rid of an overly competitive Pascal before Turing is remotely compelling.
And as far as chip design goes, even a staunch corporate apologist such as yourself must recognise that selling a 754mm² die to gamers is mental. If you are correct and Nvidia truly are so cash strapped that they have genuinely needed to double the price of every SKU in the four years since Maxwell, then they are truly screwing the pooch somewhere.
Ohhhh come on!!!!! Where's the comparison with the 1070 Ti, a DAMN TI VERSION????? Making comparisons with all cards but no 1070 Ti, ARE YOU KIDDING ME PCPER????????? Sorry.
The GTX 1070 beat the performance of the previous flagship (980 Ti) by approx 10% and was only $379, compared to the $650 launch price of the 980 Ti.
The RTX 2070 is only 10% better than the non-flagship 1080 and costs the same $500.
How does this card rate any award at all? This is not a "sweet spot between performance and price", it's a terrible value for the money compared to the last generation.
Look, Nvidia made RTX for the pro market, and those folks do not need real time that much. Look at the GP102 Pascal die that was made for pro usage first, with Nvidia only later taking a binned GP102 die and creating the GTX 1080 Ti.
The GP104 base die tapeout was only meant for consumer GPU SKUs; the GTX 1080 had the full die, while the GTX 1070 was from a binned GP104 die that did not have enough working units to be made into a 1080. GP106 was consumer also, and the GTX 1060 came from that base die tapeout.
Now for TU102, the top bins of that base die tapeout will be used for Quadros as usual, with the RTX 2080 Ti still the lowest bin from that TU102 base die tapeout. But this time around the top TU104 base die tapeout starts with a Quadro variant, and TU104 is not simply a consumer-only tapeout anymore. The RTX 2080 is made from a binned TU104 base die and has fewer resources than the top binned TU104-derived part that's for the pro/Quadro market. The TU106-based RTX 2070 is not a binned TU104 variant this generation, as the 2070 is the top end TU106 variant this generation.
So Nvidia has Quadro as the major beneficiary of that RTX ray tracing technology, and that pro market will definitely make use of ray tracing (not necessarily in real time) and Tensor Cores.
Now Tensor Core technology will definitely benefit gaming more than just the ray tracing IP, as that AI-based upscaling still works even with ray tracing turned off, so gaming performance via DLSS will not be hindered like it is with ray tracing turned on.
In fact, Nvidia's "real time" ray tracing RT cores cannot produce enough rays in the 33.33(30FPs) down to 16.76(60FPS) frame times and below time ranges, so Nvidia has to denoise that limited ray tracing output with an AI-based denoising algorithm running on the Tensor cores. AI-based upscaling via the Tensor Core based DLSS is the big game changer for Nvidia: faster 4K gaming via DLSS, with 1440p output upscaled via DLSS to look more like native 4K and allow even higher frame rates at upscaled 4K via the Tensor Core based, DLSS-trained AI.
So even more so this time around, Nvidia's major market focus is the professional market, with even more Nvidia base die tapeout variants, TU102 and TU104, having Quadro in mind before any consumer binned variants are made. TU106 is now the only current tapeout that is all consumer oriented from top to bottom, most likely starting with the RTX 2070 and below.
Gamers may not need or want ray tracing, but they will probably want DLSS, as DLSS is not going to slow down average frame rates. DLSS with 1440p output upscaled to 4K is going to result in higher average frame rates at 4K, if that DLSS technology works as Nvidia has stated.
You can almost be guaranteed to see AMD get some form of "DLSS"-like Tensor Core IP in its semi-custom console APUs ASAP, as Microsoft and Sony are heavily into using upscaling on their respective console offerings. That will be the first thing (Tensor Cores) that AMD would want for the professional AI market and consumer markets (AI-based upscaling).
Real-time ray tracing is not a priority for AMD, as the professional graphics/animation markets can do all the ray tracing they require in non-real-time accelerated fashion via OpenCL or CUDA on the GPU's shader cores, without the need for any in-hardware ray tracing cores. Some animation houses are still using CPU clusters for ray tracing, where ray tracing workloads were traditionally done.
Edit: 33.33(30FPs) down to 16.76(60FPS) frame times
To: 33.33ms(30FPs) down to 16.67ms(60FPS) frame times
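Those frame-time budgets come from a simple reciprocal: the time available per frame, in milliseconds, is 1000 divided by the frame rate. A one-line sketch (the function name is ours):

```python
def frame_time_ms(fps: float) -> float:
    # one frame's time budget in milliseconds at a given frame rate
    return 1000.0 / fps

print(round(frame_time_ms(30), 2))  # → 33.33
print(round(frame_time_ms(60), 2))  # → 16.67
```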
There is no logic in how it got a gold award. But maybe the wallets are a bit thicker…
Will PCPER be able to compare the EVGA RTX 2070 XC vs the RTX 2070 XC Ultra models? I'm interested in finding out what the thermal performance difference between a 2-slot and a 3-slot card is. Is it worth the extra $20 price difference?
Yip, I came here since I left for a while, and I see why I left. A ridiculous article. While the net is hot about how overpriced especially the 2070 is, you give it a gold award. Wow.
Bye then again. How much did nVidia pay you this time?
A friend got a 2080 by surprise from Amazon and he doesn't even have a computer. So I quickly offered him $500, and then after thinking about it, I switched it to $300… because if you are already playing in 1080p, what am I going to gain? I could go 4K, pretty sweet, Forza in 4K. But what else? I'm not going to jump to battle royale in 4K. Can't predict that future games will run in 4K on the 2080. So $300 seems fair. This is the reality, yet the video cards jumped in price. I guess people that want video cards pay whatever price the piper is setting. A PCPer Gold Award because it's the same price as the crap that came out 2 years ago and 10% faster. Moore is rolling in his grave.
You need a video card and have money to throw away? Get the 2080 Ti, don't be a chump.
You might want to check the Vega 64 Sniper Elite 1440p results, it surely doesn't get 50% worse at 1440p than at 4K…
Hi Ken,
Thank you for this review.
When Nvidia specifies “Ray Tracing Speed” does it say how many reflections, or what sort of color calculation is used? I assume they’re hitting triangles.
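For a rough sense of what a "GRays/s" figure buys, you can divide the quoted rate by frame rate and pixel count to get a rays-per-pixel budget. This back-of-envelope sketch assumes the headline rate is sustained, which real scenes won't match, and the function name is ours:

```python
def rays_per_pixel(grays_per_s: float, fps: float, width: int, height: int) -> float:
    # rays available per frame, spread over the frame's pixels
    rays_per_frame = grays_per_s * 1e9 / fps
    return rays_per_frame / (width * height)

# RTX 2070's quoted 6 GRays/s at 60 FPS, at 2560x1440
print(round(rays_per_pixel(6, 60, 2560, 1440), 1))  # → 27.1
```

That is far fewer samples per pixel than offline renderers use, which is why the denoising step discussed in the comments above matters so much.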