Earlier today, via a surprise message on Twitter, NVIDIA officially announced the availability date for the RTX 2070: October 17th.
Beautiful any way you look at it.
The GeForce RTX 2070 will be available on October 17th. #GraphicsReinvented
Shop starting at $499 ($599 Founders Edition) → https://t.co/ammFWibyFy pic.twitter.com/IsScoXm5rZ
— NVIDIA GeForce (@NVIDIAGeForce) September 25, 2018
Based on the Turing microarchitecture, the RTX 2070 will include the same RT cores for ray tracing and Tensor Cores for deep learning as the RTX 2080 and RTX 2080 Ti, albeit in different quantities.
| | RTX 2080 Ti | RTX 2080 | RTX 2070 |
|---|---|---|---|
| GPU | TU102 | TU104 | ? |
| GPU Cores | 4352 | 2944 | 2304 |
| Base Clock | 1350 MHz | 1515 MHz | 1410 MHz |
| Boost Clock | 1545 MHz / 1635 MHz (FE) | 1710 MHz / 1800 MHz (FE) | 1620 MHz / 1710 MHz (FE) |
| Texture Units | 272 | 184 | ? |
| ROP Units | 88 | 64 | ? |
| Tensor Cores | 544 | 368 | ? |
| Ray Tracing Speed | 10 GRays/s | 8 GRays/s | 6 GRays/s |
| Memory | 11GB | 8GB | 8GB |
| Memory Clock | 14000 MHz | 14000 MHz | 14000 MHz |
| Memory Interface | 352-bit G6 | 256-bit G6 | 256-bit G6 |
| Memory Bandwidth | 616 GB/s | 448 GB/s | 448 GB/s |
| TDP | 250 W / 260 W (FE) | 215 W / 225 W (FE) | 175 W / 185 W (FE) |
| Peak Compute (FP32) | 13.4 TFLOPS / 14.2 TFLOPS (FE) | 10 TFLOPS / 10.6 TFLOPS (FE) | ? |
| Transistor Count | 18.6 B | 13.6 B | ? |
| Process Tech | 12nm | 12nm | 12nm |
| MSRP (current) | $1200 (FE) / $1000 | $800 (FE) / $700 | $599 (FE) / $499 |
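For reference, the peak FP32 figures in the table fall out of a simple CUDA cores × 2 FLOPs (FMA) × boost clock calculation, so the missing RTX 2070 entry can be estimated the same way. A quick sketch, treating the 2304-core figure and the listed boost clocks as provisional:

```python
# Peak FP32 throughput = CUDA cores x 2 FLOPs per clock (FMA) x boost clock
def peak_fp32_tflops(cores, boost_mhz):
    return cores * 2 * boost_mhz * 1e6 / 1e12

print(peak_fp32_tflops(4352, 1545))  # RTX 2080 Ti: ~13.4 TFLOPS (matches the table)
print(peak_fp32_tflops(2944, 1710))  # RTX 2080:    ~10.1 TFLOPS (matches the table)
print(peak_fp32_tflops(2304, 1620))  # RTX 2070:    ~7.5 TFLOPS (estimate)
print(peak_fp32_tflops(2304, 1710))  # RTX 2070 FE: ~7.9 TFLOPS (estimate)
```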
While we don't have a full look at the specifications yet, NVIDIA has posted some technical details on the RTX 2070 product page.
The RTX 2070 Founders Edition will be available for $599, with partner cards "starting" at $499.
So is it safe to assume that, in terms of rasterization, this chip will perform similarly to the GTX 1080, within ±5%?
Also, while there definitely exists a price point between the two, from a physical perspective could there be another SKU that sits between the Titan V and the 2080 Ti? Given the physical limitations, is that even feasible? Could they release a “gaming” variant of the Titan V, one with an allocation of resources geared more toward gaming workloads than general compute?
“… from a physical perspective, could there be another SKU that sits between the Titan V and the 2080 Ti? …”
Well, the 2080 Ti has 88 ROPs, and ROP count is usually tied to the amount of memory, in this case 11GB. The 1080 Ti is a cut-down version of the Titan X (Pascal), and just like the 2080 Ti it also has 88 ROPs and 11GB of VRAM.
And we know the 1080 Ti (11GB, 88 ROPs) is a cut-down version of the Titan (12GB, 96 ROPs), so with this logic in mind, Nvidia does have uncut versions of the TU102 chip.
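A quick sketch of that ROP-to-memory relationship, assuming the usual GeForce layout of one 32-bit memory controller per ROP partition, with 8 ROPs and one memory chip per controller (that partition layout is my assumption; Nvidia doesn’t publish it per SKU):

```python
# Recent GeForce dies pair each 32-bit memory controller with 8 ROPs and one
# memory chip, so disabling a controller drops ROPs and VRAM together.
def rop_memory_config(bus_width_bits, gb_per_chip=1):
    controllers = bus_width_bits // 32
    return controllers * 8, controllers * gb_per_chip  # (ROPs, GB of VRAM)

print(rop_memory_config(384))  # full TU102 / GP102: (96 ROPs, 12 GB)
print(rop_memory_config(352))  # RTX 2080 Ti / GTX 1080 Ti cut: (88 ROPs, 11 GB)
```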
I could be wrong, feel free to correct me.
It’s just a matter of when they decide to release it; my best guess is a year from now, probably toward the end of the year. If AMD releases their 7nm consumer GPUs next year, then Nvidia will probably release the Titan xt right around that event, otherwise sometime before the holiday shopping season starts.
As far as the Titan V goes? I highly doubt we’ll see anything other than its current iteration. I mean, they just released RTX, and releasing anything Volta-related would be a step backward because the Titan V lacks RT cores.
Which brings up a question about DLSS. When that finally gets put into games, will Titan V owners be able to make use of that feature thanks to the Tensor Cores the Titan V has?
“Beautiful any way you look at it”
Now they’re just trolling.
Shop starting at $499
Even YouTubers are starting to call BS on Nvidia. Performance reviews don’t reflect what you’ll actually be buying at that price point.
TechPowerUp’s GPU database lists the RTX 2070 as TU106-based (1) with these figures:
Shading Units: 2304
TMUs: 144
ROPs: 64
SM Count: 36
Tensor Cores: 288
RT Cores: 36
Pixel Rate: 103.7 GPixel/s
Texture Rate: 233.3 GTexel/s
FP16 (half) performance: 14,930 GFLOPS (2:1)
FP32 (float) performance: 7,465 GFLOPS
FP64 (double) performance: 233.3 GFLOPS (1:32)
TechPowerUp appears to be missing the 6 GigaRays metric, but maybe the other information can help fill in some of those missing figures on a provisional basis until the dust settles.
And it’s a good idea to take any GigaRays figure and work out the per-frame budget: divide by 1000 (1000 ms per second) to get rays per millisecond, then multiply by 16.67 ms (60 FPS) or 33.33 ms (30 FPS).
So at only 6 GigaRays per second, that’s 6 billion ÷ 1000 = 6 million rays per millisecond, × 16.67 ms ≈ 100 million rays per frame at 60 FPS, or × 33.33 ms ≈ 200 million rays per frame at 30 FPS. That is not very many rays at all when you consider the number of pixels at the various screen/texture resolutions in use.
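As a sanity check on that arithmetic, here’s the same per-frame ray budget worked out in a quick sketch of my own (it just divides the quoted GigaRays figure by the frame rate):

```python
# Rays available per frame = rays per second / frames per second
def rays_per_frame(gigarays_per_s, fps):
    return gigarays_per_s * 1e9 / fps

print(f"{rays_per_frame(6, 60):,.0f}")  # ~100,000,000 rays per 16.67 ms frame
print(f"{rays_per_frame(6, 30):,.0f}")  # ~200,000,000 rays per 33.33 ms frame

# For scale: a 3840x2160 frame is 8,294,400 pixels, so at 60 FPS that budget
# works out to roughly 12 rays per pixel before shadows, AO, and bounces.
print(rays_per_frame(6, 60) / (3840 * 2160))  # ~12.1
```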
So that limited ray budget must be spread across more than just reflection and refraction rays: shadow rays and ambient-occlusion rays also draw from it if those parts of the render pass are to use ray-traced methods instead of purely raster-based ones. And with all the other work that has to run sequentially rather than in parallel across Turing’s frame-generation pipeline stages (Tensor Core-hosted trained-AI denoising, DLSS, and other things), the time actually available for ray casting is less than the full frame-time slot before the final frame buffer is finished and sent to the display.
It’s no wonder Nvidia has to lean on AI-based DLSS and AI-based denoising to clean up that limited ray tracing output and upscale from a lower resolution, given the limited resources available in a 16.67 ms to 33.33 ms window for real-time gaming with that “real time” hybrid ray tracing mixed with rasterization on RTX Turing!
(1)
“NVIDIA GeForce RTX 2070 Rev. A”
https://www.techpowerup.com/gpu-specs/geforce-rtx-2070-rev-a.c3252
Or if you don’t believe in third-party specs, see Nvidia’s Turing whitepaper:
https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf
Well, TechPowerUp’s database does say their current RTX 2070 figures are not finalized, but TPU usually gets close to the final specifications and updates them when the final numbers arrive. It’s not that I don’t trust third-party figures; it’s just that some third parties do more research toward getting the complete and final figures, and they do say these are provisional.
I like the Nvidia blogs sometimes too, because there the Nvidia engineers will even give the respective GPU base die tapeouts’ various shader-to-ROP ratios, shader-to-TMU ratios, and other ratios. And the Nvidia engineers usually do so on a per-GPU-variant and per-SM basis, for comparison and contrast across the various Nvidia base die tapeouts: GP100, GP102, GP104, GP106/108, or whatever other base die tapeout Nvidia spins up and bins down to create its entire GPU product portfolio.
Ditto for AMD’s usually one or two base die tapeouts, where AMD really needs to up their whitepaper content and adopt a more distinctive base die naming scheme like Nvidia’s, so folks can tell the difference between Vega 10 (the base die tapeout with 64 nCUs) and any APU graphics that happen to use 10 Vega nCUs, which the press also calls Vega 10. So maybe VG100 for what AMD called that first big Vega 10 base die tapeout (4096 shading units, 256 TMUs, 64 ROPs, 64 nCUs), as distinct from the 10-nCU Vega graphics in the APU, and from the Vega 56 and 64 GPUs that are made from that big Vega 10 die.
AMD is coming out with another big die tapeout on the 7nm node that they will call Vega 20 (for the Pro market), so I hope there is not some APU variant using 20 Vega nCUs that the press will also call Vega 20, causing the same naming confusion the Vega 10 name does. One can bet that at some point there will be enough non-performant Vega 20 (big die) dies that AMD will most likely spin up some consumer variant based off of the big Vega 20 base die tapeout. Every processor maker bins defective dies into lower-binned SKUs to recoup the investment in those costly wafers.
TechPowerUp needs to create specific entries in its GPU database that describe the various base die tapeouts themselves (GP100, GP102, etc.), and not just list the many derived GPU SKUs that come from them. I’d like to know how much the base die tapeouts are overprovisioned, and whether that leaves room down the line for new GPU variants like the GTX 1080 Ti that was introduced later on.
The GTX 1080 Ti used the GP102 base die tapeout instead of the GP104 tapeout used for the GTX 1080. It was those 88 ROPs, out of GP102’s 96 available, that Nvidia allotted, and that gave the GTX 1080 Ti the edge in pixel fill rate over both Vega 64/56 (only 64 ROPs) and the GTX 1080 (64 ROPs max from the GP104 base die tapeout). Pixel fill rate translates directly into higher average FPS in gaming-oriented workloads.
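To put numbers on that, peak pixel fill rate is just ROPs × core clock. A rough sketch using the boost clocks TechPowerUp lists (which clock each card actually sustains is an assumption):

```python
# Peak pixel fill rate = ROPs x core clock
def gpixels_per_s(rops, boost_mhz):
    return rops * boost_mhz / 1000

print(gpixels_per_s(88, 1582))  # GTX 1080 Ti: ~139.2 GPixel/s
print(gpixels_per_s(64, 1733))  # GTX 1080:    ~110.9 GPixel/s
print(gpixels_per_s(64, 1546))  # RX Vega 64:  ~98.9 GPixel/s
```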
I have read the Turing whitepaper, and whitepapers lately are too full of the dirty hands of marketing! But just look at those GigaRays figures and do the frame-time math: that is nowhere near enough simulated ray interactions to replace more than a small slice of the regular raster output that is still necessary for gaming/FPS workloads.
I mean, look at the average 60-watt lightbulb:
“in order to emit 60 Joules per second, the lightbulb must emit 1.8 x 10^20 photons per second. (that’s 180,000,000,000,000,000,000 photons per second!)” (1)
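That photon figure checks out. A quick sketch of the arithmetic, assuming ~600 nm visible-light photons as in the linked problem set:

```python
# Photons per second = emitted power / energy per photon, E = h*c / wavelength
h = 6.626e-34        # Planck constant, J*s
c = 3.0e8            # speed of light, m/s
wavelength = 600e-9  # ~600 nm visible light, in meters

energy_per_photon = h * c / wavelength  # ~3.3e-19 J
print(60 / energy_per_photon)           # ~1.8e20 photons per second from 60 W
```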
And that’s just one light bulb, so 10,000,000,000 rays/paths (simulated, not real) is not a very large number at all relative to reality. And photon paths and simulated interactions in gaming are very limited: they can only bounce around and pick up a limited material’s color, refraction index, and other relevant info, nowhere near the amount of physics an actual light ray is subject to.
For gaming it’s just paths and intersection points: wherever a ray path intersects a mesh, that mesh’s material values are recorded, along with any transparency and reflective/refractive information, before the ray continues to whatever other simulated interaction repeats the process or records the final estimated values to pass along to other steps in the render pipeline.
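For a concrete (toy) picture of what “a ray path intersects a mesh and records the material” means, here’s a minimal runnable sketch with a single analytic sphere standing in for a mesh. Everything here is illustrative; real RT cores traverse a BVH of triangles in hardware:

```python
# Toy ray-cast: fire one ray at a sphere and record the material on a hit.
def hit_sphere(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2 for the nearest positive t, or return None.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    cq = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * cq
    if disc < 0:
        return None
    t = (-b - disc ** 0.5) / (2 * a)
    return t if t > 0 else None

ray_origin, ray_dir = (0, 0, 0), (0, 0, 1)
sphere_center, sphere_radius = (0, 0, 5), 1.0
material_color = (0.8, 0.2, 0.2)  # the value "recorded" at the intersection

t = hit_sphere(ray_origin, ray_dir, sphere_center, sphere_radius)
if t is not None:
    print(f"hit at t={t:.2f}, recording material {material_color}")
else:
    print("miss: fall back to raster/sky shading")
```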
Some more whitepapers will be available to the actual game-engine developers under NDA, but that should not distract from the fact that 10 GigaRays is a pitifully small number of ray paths, and that has to be divided down into the even smaller number of ray paths available, on average, per frame time.
Don’t get me wrong: the amazing thing Nvidia has done is not the RT cores but the Tensor Core-based trained AIs that are doing the denoising of the limited ray output, and the DLSS that’s really there so the whole scene can be rendered at a low enough resolution and upscaled to a higher one while still retaining the necessary image quality, without looking grainy or pixelated, with jagged edges and full of holes.
Imagination Technologies did something similar with the PowerVR Wizard GPU’s hardware-based “real time” ray tracing technology, but they lacked the capital to push their IP to wider market acceptance. So Nvidia can and will be credited with being the one to get a similar hybrid ray tracing IP pushed out into the discrete GPU market, for professional usage first and foremost, and also the consumer/gaming market, because that’s where any non-performant professional Turing dies will end up going. And even TU104 has a professional variant this time around that’s binned down to create the RTX 2080, with the RTX 2070 coming from the TU106 base die tapeout.
Nvidia has to get its yields up for its Turing generation, what with all the larger dies Nvidia has to use in order to include all that extra RTX/Tensor Core IP in the GPU’s hardware.
(1) [see Problem #2]
“Astronomy 101
Problem Set #6 Solutions”
https://www.eg.bucknell.edu/physics/astronomy/astr101/prob_sets/ps6_soln.html
Way too expensive
Well, that remains to be seen. Performance should be well over the GTX 1080, but let’s see the reviews first, and how the pricing matures once cards are actually in stock.
Huuuh?!
It’s gonna perform near-identical to a stock 1080, not over it, and certainly not well over.
For $500 the 2070 is not worth it at all when used 1080s can be had for far less.
A 7nm refresh of the 20xx series will probably be worthwhile performance-wise if they maintain similar pricing, so just skip this generation of NV cards until then.
Well, wait and see. But I can almost guarantee the RTX 2070 won’t be that much slower than the RTX 2080; heck, it even has the same memory bandwidth as the RTX 2080. And the RTX 2080 is 27% faster than the GTX 1080 FE. To put that in perspective, the GTX 1080 FE is 17% faster than the GTX 1070 FE, which is more crippled by shader ratio than the 2070 is versus the 2080, and has less memory bandwidth than the GTX 1080. A closer performance-ratio comparison would be GTX 1070 Ti vs GTX 1070 (same memory, different shader counts), which puts the gap at 13%.
So the safest bet: the RTX 2080 will be ~15% faster than the RTX 2070, and thus the RTX 2070 will be well over the GTX 1080.
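A rough way to sanity-check those percentages is to compare raw shader throughput (cores × boost clock) for each pair. A sketch of that; real-game scaling runs well below these raw ratios when memory bandwidth is shared, which is how a ~35% raw gap can shrink toward the ~15% guessed above:

```python
# Raw FP32 throughput ratio: an upper bound on real-game scaling.
def raw_ratio(cores_a, clock_a, cores_b, clock_b):
    return (cores_a * clock_a) / (cores_b * clock_b)

print(raw_ratio(2560, 1733, 1920, 1683))  # GTX 1080 FE vs 1070 FE: ~1.37 raw, ~17% real
print(raw_ratio(2432, 1683, 1920, 1683))  # GTX 1070 Ti vs 1070:    ~1.27 raw, ~13% real
print(raw_ratio(2944, 1800, 2304, 1710))  # RTX 2080 FE vs 2070 FE: ~1.35 raw
```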
GTX 1070
Shading Units: 1920
TMUs: 120
ROPs: 64
SM Count: 15
Pixel Rate: 107.7 GPixel/s
Texture Rate: 202.0 GTexel/s
FP16 (half) performance: 101.0 GFLOPS (1:64)
FP32 (float) performance: 6,463 GFLOPS
FP64 (double) performance: 202.0 GFLOPS (1:32)
Memory Size: 8192 MB
Memory Type: GDDR5
Memory Bus: 256 bit
Bandwidth: 256.3 GB/s
————————————————————
RTX 2070
Shading Units: 2304
TMUs: 144
ROPs: 64
SM Count: 36
Tensor Cores: 288
RT Cores: 36
Pixel Rate: 103.7 GPixel/s
Texture Rate: 233.3 GTexel/s
FP16 (half) performance: 14,930 GFLOPS (2:1)
FP32 (float) performance: 7,465 GFLOPS
FP64 (double) performance: 233.3 GFLOPS (1:32)
Memory Size: 8192 MB
Memory Type: GDDR6
Memory Bus: 256 bit
Bandwidth: 448.0 GB/s
Note: all information from TechPowerUp’s GPU database.
Hard to believe that the GTX 970 launched for $329 in 2014, with an SLI bridge. Curious to see if this card will be faster than the Vega 64; looking forward to your review as usual.
“Beautiful any way you look at it”
Yes, it’s beautiful, but I thought that was an old XFX design!? I don’t see NV crediting XFX?! … so pleasseee….