GP104 Strikes Again
The Founders Edition of the GeForce GTX 1070 is here. Is this your next mid-range card?
It’s only been three weeks since NVIDIA unveiled the GeForce GTX 1080 and GTX 1070 graphics cards at a live streaming event in Austin, TX. But it feels like those two GPUs, one of which hadn’t even been reviewed until today, have already drastically shifted the landscape of graphics, VR, and PC gaming.
Half of the “new GPU” stories are told, with AMD due to follow up soon with Polaris, but it was clear to anyone watching the enthusiast segment with a hint of history that a line was drawn in the sand that day. There is THEN, and there is NOW. Today’s detailed review of the GeForce GTX 1070 completes NVIDIA’s first wave of NOW products, following closely behind the GeForce GTX 1080.
Interestingly, and in a move that is very uncharacteristic of NVIDIA, detailed specifications of the GeForce GTX 1070 were released on GeForce.com well before today’s reviews. With information on the CUDA core count, clock speeds, and memory bandwidth it was possible to get a solid sense of where the GTX 1070 would perform; I imagine that many of you already did the napkin math to figure that out. There is no more guessing though – reviews and testing are all done, and I think you’ll find that the GTX 1070 is as exciting as the GTX 1080, if not more so, thanks to the combination of performance and pricing it provides.
Let’s dive in.
The GeForce GTX 1070 – An only slightly mutilated GP104
The setup for this review is going to be a lot quicker than was the case with the GTX 1080. We already know about the architecture, the new features and how it ticks. At this point, we have only a couple specification changes and a memory swap to worry about.
| | GTX 1080 | GTX 1070 | GTX 980 Ti | GTX 980 | GTX 970 | R9 Fury X | R9 Fury | R9 Nano | R9 390X |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GP104 | GP104 | GM200 | GM204 | GM204 | Fiji XT | Fiji Pro | Fiji XT | Hawaii XT |
| Rated Clock | 1607 MHz | 1506 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz | 1050 MHz |
| Memory Clock | 10000 MHz | 8000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 500 MHz | 6000 MHz |
| Memory Interface | 256-bit G5X | 256-bit | 384-bit | 256-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 512-bit |
| Memory Bandwidth | 320 GB/s | 256 GB/s | 336 GB/s | 224 GB/s | 196 GB/s | 512 GB/s | 512 GB/s | 512 GB/s | 320 GB/s |
| TDP | 180 watts | 150 watts | 250 watts | 165 watts | 145 watts | 275 watts | 275 watts | 175 watts | 275 watts |
| Peak Compute | 8.2 TFLOPS | 5.7 TFLOPS | 5.63 TFLOPS | 4.61 TFLOPS | 3.4 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS | 5.63 TFLOPS |
NVIDIA has reduced the CUDA core count on the GTX 1070 from 2560 to 1920, a drop of 25%. That is actually a bigger cut than the GTX 970 took relative to the GTX 980 (~19%), so there is the potential for a larger performance disparity this generation. The 1920 core count is still higher than the GTX 970’s 1664, however, and with a significantly higher clock speed there is no doubt which card is going to be faster.
Texture unit count drops from 160 on the GTX 1080 to 120 on the GTX 1070 thanks to a loss of five SMs. It looks like NVIDIA has disabled one complete GPC rather than disabling SMs piecemeal across the GPU. ROP count remains the same though at 64, and of course the memory bus stays at 256-bit to go along with them.
Speaking of the memory controller, even though it is architecturally identical between the two Pascal products, the GTX 1070 is using 8GB of GDDR5 rather than the newer, faster GDDR5X found on the GTX 1080. Frequency is reduced from 10 Gbps to 8 Gbps and memory bandwidth drops from 320 GB/s to 256 GB/s. That being said, this is the first GPU to ship with 8.0 GHz GDDR5 memory, so that is still an increase over the data rate the GTX 980 produced at stock. Couple that with the improved memory compression on Pascal and there shouldn’t be any concern over the memory design on the GTX 1070.
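If you want to sanity-check those bandwidth figures yourself, peak bandwidth falls straight out of the per-pin data rate and the bus width. Here's a quick back-of-the-napkin sketch in Python, using the numbers from the table above:

```python
# Peak memory bandwidth (GB/s) = per-pin data rate (Gbps) x bus width (bits) / 8
def peak_bandwidth(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth(10, 256))  # GTX 1080, 10 Gbps GDDR5X: 320.0 GB/s
print(peak_bandwidth(8, 256))   # GTX 1070, 8 Gbps GDDR5:   256.0 GB/s
print(peak_bandwidth(7, 256))   # GTX 980,  7 Gbps GDDR5:   224.0 GB/s
```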
Out of the box clock speeds on the reference / Founders Edition of the GTX 1070 are set at 1506 MHz base and 1683 MHz Boost. Though they are slightly lower than the GTX 1080’s, the increase over the GTX 970 is substantial – around 43%! Doing the math in your head should already give you a clue to performance: 15% more cores and ~43% higher clock rates than the GTX 970 should give the GTX 1070 a big step forward for the price segment compared to Maxwell.
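If you'd rather let Python do that napkin math: peak FP32 throughput is simply cores × clock × 2 FLOPs, since each CUDA core can retire one fused multiply-add per clock. A quick sketch using the base clocks from the table (Boost clocks shift the absolute numbers, but not the ratios much):

```python
# Peak FP32 throughput (TFLOPS) = CUDA cores x clock (MHz) x 2 FLOPs per clock / 1e6
def peak_tflops(cores, clock_mhz):
    return cores * clock_mhz * 2 / 1e6

gtx_1070 = peak_tflops(1920, 1506)   # ~5.8 TFLOPS
gtx_970  = peak_tflops(1664, 1050)   # ~3.5 TFLOPS
gtx_1080 = peak_tflops(2560, 1607)   # ~8.2 TFLOPS

print(f"GTX 1070 vs GTX 970:  {gtx_1070 / gtx_970:.2f}x")   # ~1.66x on paper
print(f"GTX 1070 vs GTX 1080: {gtx_1070 / gtx_1080:.2f}x")  # ~0.70x on paper
```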
As to power consumption, the GTX 1070 uses 30 watts less than the GTX 1080, with a TDP of 150 watts according to NVIDIA specifications. That is 5 watts higher than the GTX 970, though I imagine any gamer would give up 5 watts for the performance delta we are seeing. Power is again supplied by just a single 8-pin connector.
Architectural Changes and Features Recap
For both my sanity and yours, I’m not going to attempt to retell the story surrounding Pascal and the GP104 GPU. There are some slight architectural changes including the addition of a new geometry processing block that enables simultaneous multi-projection and some intricate path modifications to allow the 16nm process to reach these kinds of crazy clock speeds, but GP104 exhibits the exact same FLOPS per clock that Maxwell does.
There are several new features supported on Pascal that are worth noting as well, but were covered previously in the review of the GeForce GTX 1080.
- Asynchronous compute improvements for scheduling and preemption
- Simultaneous Multi-Projection to improve VR performance and fix Surround gaming fisheye effects
- SLI got faster but is only recommended for two GPUs now
- GPU Boost 3.0 is improved with new overclocking capability for voltage curves
- HDR and video support gets upgraded
If you haven’t caught up on these technologies I would encourage you to take a slight detour to the pages linked above and give them a read. They are significant and noteworthy additions to the GeForce product stack, and I am particularly excited to try out SMP in some titles that support it.
(PS – if you want, we interviewed NVIDIA’s Tom Petersen during a live video stream on GTX 1080 launch day, so you can alternatively catch up on the technology via this YouTube playlist.)
Based on my observation of the GTX 970 and 980 releases, I have a feeling that the GTX 1070 will be the best value, and anyone who buys a GTX 1080 will regret it once the 1080 Ti releases. Personally I may end up getting one 1080 just to try it out for gaming and folding@home, but I’m really eager to see what Nvidia brings to the table with the Titanium release.
The link on the “Testing Suite and Methodology Update” page in this paragraph:
“For those of you that have never read about our Frame Rating capture-based performance analysis system, the following section is for you. If you have, feel free to jump straight into the benchmark action!!”
jumps to the 1080 review.
I was properly confused for a few seconds when I didn’t see any 1070 data on the page.
@Allyn: What would you think about frame time weighted frame time percentile graphs? Like in the SSD reviews?
Just a joke, I don’t think it matters that much in this data since the variance is not multiple orders of magnitude here.
Ryan and I actually had this conversation the other day. It could come into play with the percentile plots, but things would need to be presented a bit differently. It would help spread cards with greater variation out of the pack a bit more, but as it stands now, cards that misbehave tend to misbehave badly enough that we don't need to weigh it any differently to make it obvious.
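For anyone curious, here's a minimal sketch of what a frame-time-weighted percentile could look like, where each frame counts in proportion to how long it was on screen instead of getting one vote per frame (an illustration only, not our actual Frame Rating pipeline):

```python
import numpy as np

def weighted_frame_time_percentile(frame_times_ms, pct):
    """Percentile where each frame is weighted by its own duration,
    so long (hitchy) frames count for their full time on screen."""
    ft = np.sort(np.asarray(frame_times_ms, dtype=float))
    cum = np.cumsum(ft) / ft.sum()  # fraction of total wall-clock time elapsed
    idx = min(np.searchsorted(cum, pct / 100.0), len(ft) - 1)
    return ft[idx]

# 90% smooth 60 FPS frames plus a run of ~30 FPS hitches
frames = [16.7] * 90 + [33.3] * 10
print(np.percentile(frames, 85))                   # 16.7 -- counting frames
print(weighted_frame_time_percentile(frames, 85))  # 33.3 -- counting time on screen
```

The weighting pulls the hitchy frames forward in the distribution, which is exactly the "spreading cards out of the pack" effect described above.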
These new power measurements are amazing, thanks pcper for keeping on it, pushing measurement methods and supplying us with sensible data.
(however, I think the particular page mixes Hawaii, Fiji and Tahiti as others have also commented on)
“Testing suite” page:
>> As a result, you’ll [word missing] two sets of data in our benchmark pages.
I know it’s already a lot of work, but can we have some OpenCL or Blender benchmarks? Or even just some Premiere Pro testing.
not all people game
and, well, also for the 1080 please! >.<
Using Chrome atm. When I click on a picture, the pictures tend to look a bit weird. With the power graph, when I click on it the picture isn’t centered on the page. When I click on the bar graph, the picture is super large.
Would anyone be able to say if one could pair this GPU with a 980 Ti, since they are comparable in performance and are pretty much the same architecture?
Unlikely nVidia would let you do it. It might work in something like Ashes of the Singularity, but betting that other developers will do a similar version of multi-card rendering doesn’t seem like a sound plan.
Why single out power used by the graphics card alone? As long as GPUs need a driver executed by the CPU, it does not make sense to me.
Great review yet again Ryan. Just a heads up, the link to the benchmarks on page 3 sends one to the 1080 page.
Are the other cards used in the comparison overclocked?
Why does this site still use the stupid tiny lines? Why can’t you just put the damn FPS numbers down and be done with it! I hate looking at very tiny lines just to get an idea of performance! This is a huge reason why I stopped coming to this site for reviews!
Ryan.. would you agree that nVidia probably made the 970 too good of a deal for what you got? It seems there are more differences between the 1070 and the 1080 this time around.
If nVidia could change history, they probably would have made the 970 either not as fast or more expensive.
@Ryan Shrout, can you do another review covering the Micron vs. Samsung VRAM on the GTX 1070?
There’s a fiasco brewing like the previous GTX 970 3.5GB VRAM issue, and guess what, this time it’s about the memory brand.
Obviously every reviewer got cherry-picked Samsung chips, so how come there’s no Micron chip in any review??? Thanks.