2560×1440 Game Benchmarks
As we move on to the 2560×1440 results we'll revisit the standalone benchmark applications, starting with FFXV. All of the benchmarks to follow were run with the same settings as the 1080p results, with only the resolution changed.
The GTX 1660 Ti is effectively tied with the GTX 1070 in this benchmark at 1440p, with the advantage over the GTX 1060 6GB at ~37%.
Here we have what might end up being the worst result for the GTX 1660 Ti in this review, as it comes up just short of the GTX 980 Ti for the first time with WoT: enCore. On the other hand, this is just a 0.1 ms frame time difference and the cards are basically tied. Gains over the GTX 1060 6GB are ~32%, though the 1660 Ti does trail the GTX 1070 here by 4 FPS.
Now we move on to the standard game benchmarks, beginning with the DX12 tests and Ashes: Escalation.
The gap between the GTX 1070 and GTX 1660 Ti widens with Ashes at 1440p, with the 1660 Ti trailing here by 5.8 FPS. The increase over the 1060 6GB is ~28% here.
Our next DX12 game is Far Cry 5.
Here the GTX 1660 Ti comes back to within 0.1 ms of the GTX 1070 average frame time, which is close enough to call it a tie. The 1660 Ti is ~40% faster than the GTX 1060 6GB here.
Next we have Shadow of the Tomb Raider, our final DX12 test.
This is the most impressive result so far for the GTX 1660 Ti, as it ties the GTX 1070 Ti here (though with a reduction in overall smoothness compared to the 1070 Ti when looking at 99th percentile frame times). It also posts big gains of ~49% over the GTX 1060 6GB.
Next we look at DX11 game benchmarks, beginning with Middle-earth: Shadow of War.
This time the GTX 1660 Ti sits between the GTX 1070 and 1070 Ti in performance, and offers a nearly 42% increase over the GTX 1060 6GB.
Finally we revisit F1 2018 at 1440p, our last DX11 benchmark.
Here the GTX 1660 Ti is just past the GTX 1070 with a 1.4 FPS edge, though when you consider that this is only about a 0.1 ms frame time difference it looks more like a tie (rounding does affect this slightly). The gains over the GTX 1060 6GB continue to impress, with an increase of nearly 40% here.
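For readers wondering how a 1.4 FPS edge can amount to only ~0.1 ms, here is a quick sketch of the FPS-to-frame-time conversion. The averages below are illustrative assumptions in the range the text implies, not the review's exact chart values:

```python
def fps_to_frametime_ms(fps):
    """Convert an average frame rate (FPS) to an average frame time in ms."""
    return 1000.0 / fps

# Hypothetical averages in the ~118 FPS range implied by the text;
# the review's exact chart values are not reproduced here.
gtx_1070 = 117.0
gtx_1660_ti = gtx_1070 + 1.4  # the 1.4 FPS edge mentioned above

delta_ms = fps_to_frametime_ms(gtx_1070) - fps_to_frametime_ms(gtx_1660_ti)
print(f"{delta_ms:.3f} ms")  # ~0.101 ms, i.e. about 0.1 ms
```

Because frame time is the reciprocal of frame rate, the same FPS gap shrinks in milliseconds as the baseline frame rate climbs, which is why a small FPS edge at high frame rates is effectively a tie.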
On the next page we'll see what sort of results the GTX 1660 Ti can achieve with a quick overclock.
OMG! GTX1660Ti can’t even keep up with PS5/XBOXx2.
True, but those will probably launch in holiday 2020, which is 22 months away.
That’s not even the problem… Check out the Metro PS4 vs. PC vs. XB1 comparison videos, and then check out the PC requirements for that game. I swear, to me it looks like the PS4 version (not even the Pro version, the stock PS4) looks the best. This issue has been going on forever… I thought that with the consoles being glorified PCs this BS would stop, but the trend continues.
Fine, it doesn’t have to look the best on the PC but don’t actually require more processing power for no reason.
They are… out?
I think this is the first generation of mainstream cards that are actually targeting reasonable 1440p performance or high-frame-rate 1080p. There are many gamers out there right now moving into those segments, and it makes sense to target them with an affordable card.
Yes, but then the question becomes, “what is affordable?”
With this slow creep of increased cost, generation by generation, when will these cards no longer be “affordable?”
I’m going to pull two quotes from the article, and then explain my thoughts.
“the GTX 1060 6GB, which launched at $249 ($299 for the Founders Edition) way back in July 2016”
“$279 might be seen as the inflation-adjusted price for such a card after nearly two years, but that sort of talk will not win me any friends in the lovely world of internet comments”
We are seeing a card, at approximately the same street price as the GTX 1060 at launch 2.5 years ago, that gives ~50% more performance. As I said before, this level of performance is getting into solid 1440p and high-FPS 1080p range, which is awesome for gamers. This is a great deal, no matter how you slice it. Yes, it would be better if it were cheaper; that can be said for every product. But in the real world inflation is a thing, and nVidia is not immune to its effects. The other thing that can’t be missed is that launch prices almost always drop, and we will probably see these cards hit $249 sometime this year.
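The commenter's value argument can be put in numbers. The launch prices are those quoted from the article, and the ~50% uplift is the review's own 1440p estimate, used here purely for illustration:

```python
# Launch prices quoted from the article; the ~50% uplift is the
# review's own estimate, used here purely for illustration.
gtx_1060_price = 249.0     # USD, July 2016 launch
gtx_1660_ti_price = 279.0  # USD, at launch

perf_uplift = 1.50  # GTX 1660 Ti vs. GTX 1060 6GB, ~50% faster

price_ratio = gtx_1660_ti_price / gtx_1060_price
price_increase = price_ratio - 1.0                      # ~12% higher price
perf_per_dollar_gain = perf_uplift / price_ratio - 1.0  # ~34% more perf/$

print(f"price up {price_increase:.0%}, perf/$ up {perf_per_dollar_gain:.0%}")
```

Even before any inflation adjustment, performance per dollar improves by roughly a third on these assumptions, which is the core of the commenter's point.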
The way I look at it, it trades blows with the 1070/1070 Ti for $100 less than the 1070 launched at. The 1660 Ti also has a lower TDP, so it’s easier to cool and cheaper to power. Seems like a great value to me.
The performance/price ratio looks great, but in a time of recession, despite economists’ lies, cash is king and every cent is worth sparing.
As usual, NV$ tries to deplete gamers’ pockets with a higher price tag!
I’m still waiting to see any GTX 1660 under $249… as a true GTX 1060 replacement.
Nice review Sebastian, could you please add a power consumption chart?
Thanks
Yes! Need to add that. I’d started power testing at 2560×1440 and didn’t want to mix with older 1080p/ultra results, since 1440p has higher draw. Will have that up by this evening as I get through the other cards. Ran out of time last night (aka passed out at around 2am).
Any support for variable refresh rate (FreeSync)?
Great question. It should, since the requirement is Pascal and up, but I’ll test with a FreeSync monitor and the latest driver today.
The no-RTX Turing raster gaming card has arrived, and RX 590 prices will have to fall along with the Vega 56. And yet the Vega 56 results are conspicuously absent from the benchmark comparison charts, while the Vega 56 MSRP is reported to have been strategically lowered to $279 by retailers(1).
So the benchmarks need to be updated to include the RX Vega 56’s stock and overclocked results, because the price delta between the GTX 1660 Ti and Vega 56 amounts to $0.
What AMD also needs to begin offering is some lower-binned Vega 20 die variant, binned lower than the Radeon VII. Vega on a 7nm process node can still offer more competition against Nvidia for raster-only gaming at the higher end.
Any price/performance comparison charts going forward need to reflect current Vega selling prices and not MSRP.
(1)
“AMD Partners Cut Pricing of Radeon RX Vega 56 to Preempt GeForce GTX 1660 Ti”
https://www.techpowerup.com/252900/amd-partners-cut-pricing-of-radeon-rx-vega-56-to-preempt-geforce-gtx-1660-ti
“…There is nothing fused off here or disabled in software with TU116…”
See, for me, I was SURE this was going to be a “binning” part, but I’m truly surprised they actively designed/manufactured/distributed this part as part of a planned-out roadmap. Very interesting.
Just because the top-binned part from the TU116 base die tapeout is the GTX 1660 Ti does not mean there cannot be a lower-binned variant derived from defective TU116 die samples. And as always, as a result of the imperfect diffusion process there will be defective dies no matter which base die tapeout is used.
Nvidia will die-harvest TU116 parts for some lower price segment, for TU116-based GTX 1600 series non-Ti/lower variants like GTX 1650/1630 parts that may be used for OEM PCs and such.
Nvidia is smart to release a line of raster-gaming-focused GTX 1600 series parts for that market segment, which can still rely on all gaming titles having alternative code paths in the games and graphics APIs (Vulkan, DX12/DXR, and such) that make some ray tracing/AI features available via a shader-core-accelerated software code path for all non-RTX-branded Nvidia and AMD/Intel graphics SKUs.
So you can be damn sure that if Intel/AMD do not have any RTX/AI-like GPU IP in their hardware, they will work with both Microsoft and the Khronos Group (Vulkan) to make sure there are alternative code paths for ray tracing and AI-accelerated workloads done on the GPU’s shader cores, for any GPU SKUs that lack dedicated ray tracing and tensor core hardware. Nvidia will as well, just to continue selling Pascal and non-RTX Turing offerings that can still make use of any alternative graphics/compute API code paths for GPU SKUs that are not RTX enabled.
Think about this! Intel has acquired the FPGA maker Altera, so Intel could very well program an FPGA to do the ray tracing/bounding volume hierarchy (BVH) calculations, as well as implement FPGA-based tensor cores. Ditto for AMD working in partnership with Xilinx for some of the very same functionality. Xilinx and AMD have already worked on some Epyc-based HPC platforms that are paired with Alveo U250 accelerator cards(1).
AMD, via Microsoft’s and Sony’s deep pockets, could have some Xilinx FPGA IP integrated alongside next-generation AMD console APUs. That Xilinx FPGA IP could be interfaced via AMD’s Infinity Fabric (xGMI) IP, and it could all be done via AMD’s EESC division working in partnership with Microsoft and/or Sony and Xilinx/others, with FPGAs programmed to do what AMD’s console APU hardware currently cannot, except via the slower shader-core/software code path.
I really wish that AMD and Xilinx would cosy up even more to better compete with Intel’s massive IP portfolio, which is going to include discrete GPUs in the 2020-and-later time frame. AMD’s advantage of having both GPUs and x86 CPUs under its IP umbrella is going to come to an end after 2020. Then there will be Intel with CPUs, discrete GPUs/integrated graphics, and FPGAs/memory/other PC technology IP all under one umbrella.
Nvidia’s CEO has got to be having some extra sleepless nights pondering Intel’s discrete GPU market entry, what with Intel having FPGA IP to include with its discrete GPUs, programmable for tasks like ray tracing/BVH and matrix math (tensor cores), before Raja’s/Intel’s teams can get a more ASIC-like answer to Nvidia’s RTX IP.
Just remember that AMD’s ProRender software/plugins support simultaneous CPU core/GPU shader core ray tracing acceleration, and that may be what the Khronos Group uses for Vulkan, with that code base modified to work inside Vulkan’s API via some AMD extensions, or more likely some cross-platform inclusion in the vendor-neutral section of the Vulkan graphics API standard. Some Vulkan extensions that were once made for a single maker’s GPU hardware get adopted into the full cross-platform Vulkan specification once the rest of the market begins making use of them, and that adoption can be GPU-hardware based or software-code-path based.
(1)
“30,000 Images/Second: Xilinx and AMD Claim AI Inferencing Record”
https://www.hpcwire.com/2018/10/03/30000-images-second-xilinx-and-amd-claim-ai-inferencing-record/
It would be interesting to know nVidia’s gross margin on these, because the die size of the 1660 Ti is quite a bit bigger than the 1060’s despite the smaller process (16nm to 12nm). GDDR6 is also more expensive than the GDDR5 the 1060 launched with, yet the price is largely the same. At face value it seems like nVidia is accepting lower margins than the 1060 commanded, which is uncharacteristic of them.
Look at all the improvements TU116/Turing/GTX has: improved shader core/SM/cache/other tweaks, and extra shader cores at that. Turing has a different shader-core-to-SM ratio than GP106/Pascal, and its cache subsystems are improved over Pascal’s. There is so much more new IP on Turing even with the RTX IP excluded, and really, how hard is it to bring up TechPowerUp’s GPU database in 2 browser tabs, one for the GTX 1060 and one for the GTX 1660 Ti? Turing even without the RTX IP is still going to be larger because of the transistor count needed to enable all the improvements to Turing’s microarchitecture. And did I say more shader cores!
This is the mainstream GPU market where, unlike the flagship GPU market, sales volume is where the revenue is to be had. It’s not unrealistic, because Nvidia knows that raster-oriented gaming titles will continue to rule the roost for a few more years, and AMD, and soon Intel as well, will be offering some stiff competition: from AMD come Navi time, and after that whatever Intel will be offering.
And thus Nvidia, which has only GPUs and not much else producing the lion’s share of its revenues, will really have mainstream GPU competition from 2 other players and not just one. Nvidia has to retain its GTX raster gaming line for that mainstream market (dominated by raster-oriented gaming titles currently and for some time to come) in addition to its more costly high-end RTX-branded products, which Nvidia is hoping (betting billions on) will become a new gaming standard for titles that use ray tracing and AI in the GPU hardware.
RTX (ray tracing/AI denoising and AI AA/upscaling) is going to be a do-or-die thing for Nvidia, because that’s what Nvidia has and will use to differentiate its GPU offerings until AMD and Intel have time to catch up. Nvidia will spend billions to get as many gaming titles as possible RTX-enabled over the next few years. So GTX/Turing is a stopgap measure for Nvidia to retain mainstream GPU market share that’s currently based on raster performance and not any RTX performance, at least not within the next year or possibly two.
Nvidia has only GPUs for its bread and butter, unlike AMD (CPUs, GPUs) currently or Intel (CPUs, incoming discrete GPUs, and Optane/memory/TB3) come 2020. I’d expect to see more lower-binned GTX/Turing (TU116) SKUs arriving as the Pascal variants’ supply channels run dry.
Here are the GTX 1060 specs:
Shading Units: 1280
TMUs: 80
ROPs: 48
SM Count: 10
GPU Name: GP106
GPU Variant: GP106-400-A1
Architecture: Pascal
Foundry: TSMC
Process Size: 16nm
Transistors: 4,400 million
Die Size: 200 mm²
Here are the GTX 1660Ti specs:
Shading Units: 1536
TMUs: 96
ROPs: 48
SM Count: 24
GPU Name: TU116
GPU Variant: TU116-400-A1
Architecture: Turing
Foundry: TSMC
Process Size: 12nm
Transistors: 6,600 million
Die Size: 284 mm²
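Putting the two spec lists in numbers, a quick transistor-density calculation (using only the figures listed above) shows that the 12nm node barely changes density over 16nm, so the extra die area really does go to the added SMs and cache rather than to the process:

```python
# Figures exactly as listed in the comment above (TechPowerUp numbers).
gp106 = {"transistors_m": 4400, "die_mm2": 200.0}  # GTX 1060, 16nm
tu116 = {"transistors_m": 6600, "die_mm2": 284.0}  # GTX 1660 Ti, 12nm

def density(chip):
    """Transistor density in millions of transistors per mm^2."""
    return chip["transistors_m"] / chip["die_mm2"]

print(f"GP106: {density(gp106):.1f} MTr/mm^2")  # 22.0
print(f"TU116: {density(tu116):.1f} MTr/mm^2")  # 23.2
```

A density gain of only ~6% is consistent with TSMC's 12nm being a refinement of its 16nm node rather than a full shrink.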
The more curious thing for me is if there will be a mobile version of this card? This looks very intriguing in a laptop.
The prices in the table are plainly wrong, mixing Founders Edition with partner models.
How many stock options did nVidia give for this review?
I wish you would add fan noise measurements. I’m explicitly getting a 1660 Ti because of the low wattage, and therefore hopefully quiet fan noise.
Oh boohoo! Why can’t Nvidia make a GPU that can run 3 4K screens at 120hz and then give it away for free? They’re rich, they can afford it. But they won’t, and you know why? Because they are EVIL and GREEDY!
Wait a minute… I understood that Turing and ray tracing were only going to be available on the 2080 series.
When did NVIDIA change that stance from as little as 6 months ago? I use NVIDIA cards for creating content with Octane Render, which is why I ask… the 2080s, and especially the Ti, being well overpriced this series, imo, is why I ask.