Synthetic and 1920×1080 Game Benchmarks
| PC Perspective GPU Test Platform | |
| --- | --- |
| Processor | Intel Core i7-8700K |
| Motherboard | ASUS ROG STRIX Z370-H Gaming |
| Memory | Corsair Vengeance LED 16GB (8GBx2) DDR4-3000 |
| Storage | Samsung 850 EVO 1TB |
| Power Supply | CORSAIR RM1000x 1000W |
| Operating System | Windows 10 64-bit (Version 1803) |
| Drivers | AMD: 18.50; NVIDIA: 417.71, 418.91 (GTX 1660 Ti) |
Note: The GTX 1660 Ti benchmarks on the next two pages were produced using the MSI GAMING X, which arrived first and was the focus of early testing. This card has a higher TDP and boost clock than the reference specification, and should be representative of what many partners will release, as this is an AIB-only launch (no reference cards).
Synthetic Benchmarks
The usual disclaimer about synthetic vs. real-world gaming results applies, of course, though it is sometimes handy to refer to a list of controlled benchmark results like this. As with all benchmarks presented in this review, each test was run three times and the results averaged to produce the score/frametime/FPS numbers you will see on the charts below.
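As a rough illustration of that averaging step, here is a minimal sketch in Python; the `run_benchmark` callable and its return value are hypothetical stand-ins, not our actual test harness:

```python
# Minimal sketch of the three-run averaging described above. The run_benchmark
# callable is a hypothetical stand-in for launching a test and reading its result.
from statistics import mean

def averaged_result(run_benchmark, runs=3):
    """Run a benchmark `runs` times and return the mean of the reported results."""
    scores = [run_benchmark() for _ in range(runs)]  # e.g. score, avg FPS, or frametime
    return mean(scores)                              # the value plotted on the charts
```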
We begin with 3DMark Time Spy, run at the stock settings. This is a DX12 test which renders at 2560×1440.
If the GTX 1660 Ti can hold on to this position it will be quite an impressive card at $279, but the results to follow help illustrate why synthetics do not directly translate to in-game performance.
Next is Unigine Superposition, run here using the 1080p high preset. This is a DX11 test.
Here the GTX 1660 Ti is behind the GTX 1070, though only by 80 points in overall score. This is still a significant drop from the 3DMark Time Spy result, but that test is also DX12 and 1440p. So what happens when we run Superposition, increasing the resolution to 1440p and using the same settings?
Not much changes, actually. At least as far as the GTX 1660 Ti's position relative to the GTX 1070 goes. But this card is more of a replacement for the GTX 1060 6GB, and in that regard it shows significant gains in these synthetic tests. What about in games? That's next.
1920×1080 Game Benchmarks
As with the previous GeForce RTX 2060 review, we have tested the GTX 1660 Ti with a mix of DX12 and DX11 games, with a couple of standalone benchmarks to round things out. We will begin with these self-contained benchmarks, which are Final Fantasy XV (run at the "standard" preset to reduce the impact of NVIDIA GameWorks optimizations) and World of Tanks enCore (run at the "ultra" preset).
The GTX 1660 Ti manages to average 1 FPS more than the GTX 1070 in FFXV at 1080p, and is 23.7 FPS ahead of the GTX 1060 6GB, an increase of ~37%.
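For readers who want to see how those two figures relate, the arithmetic works out as below; this is illustrative only, since the implied FPS values are back-calculated from the rounded numbers above and the actual chart values may differ slightly:

```python
# Back-calculating the implied FFXV results from the figures quoted above.
# Illustrative only: rounding in the quoted numbers means chart values may differ slightly.
delta_fps = 23.7                        # GTX 1660 Ti lead over the GTX 1060 6GB
pct_gain = 0.37                         # ~37% increase reported above

gtx_1060_6gb = delta_fps / pct_gain     # implied baseline, ~64 FPS
gtx_1660_ti = gtx_1060_6gb + delta_fps  # implied result, ~88 FPS
print(f"GTX 1060 6GB ~= {gtx_1060_6gb:.0f} FPS, GTX 1660 Ti ~= {gtx_1660_ti:.0f} FPS")
```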
Things are a bit different with the World of Tanks enCore benchmark, as the GTX 1660 Ti is between the GTX 980 Ti and GTX 1070 here. In this test the 1660 Ti has a ~35% advantage over the 1060 6GB, however.
Moving on to the standard game benchmark results, we will begin with the DX12 tests. First up is Ashes of the Singularity: Escalation, run at the high preset.
In this game the GTX 1660 Ti has its lowest showing so far, trailing the GTX 1070 results by 7.3 FPS and with its lead over the 1060 6GB down to ~19%.
Far Cry 5 is next, also run at the high preset:
Here the GTX 1660 Ti and GTX 1070 are effectively tied, and the increase over the 1060 6GB is up to ~36%.
The final DX12 test is Shadow of the Tomb Raider, which was also run at the default "high" settings. While the first two DX12 games are better optimized for AMD hardware than many of our other tests, this game certainly has a reputation of being an NVIDIA-friendly benchmark. We will see how that affects things with the GTX 1660 Ti.
As impressive as NVIDIA cards tend to look vs. AMD in this game (again, this is more NVIDIA optimized than other titles), among the GeForce cards the GTX 1660 Ti is back to trailing the GTX 1070, but by the slimmest of margins. Performance vs. the GTX 1060 6GB shows an increase of ~32% here.
And now we move on to the DX11 games, beginning with Middle-earth: Shadow of War. This was run with standard "high" settings.
In this game we have the GTX 1660 Ti taking the lead by 3.5 FPS over the GTX 1070, with the increase over the GTX 1060 6GB up to ~46% here.
Finally we'll look at results with F1 2018, another DX11 title and our last test at 1080p.
A similar result here compared to ME: Shadow of War, as the GTX 1660 Ti seems to favor DX11 titles at least in this group. Increase over the GTX 1060 6GB is ~43% in this test.
On the next page we'll see how the GTX 1660 Ti fares when we move up to 2560×1440 resolution.
OMG! GTX1660Ti can’t even keep up with PS5/XBOXx2.
True, but those will probably launch in holiday 2020, which is 22 months away.
That's not even the problem… Check out the Metro PS4 vs PC vs XB1 comparison videos, and then check out the PC requirements for that game. I swear the PS4 version (not even the Pro version, the stock PS4) looks the best to me. This issue has been going on forever… I thought that with the consoles being glorified PCs this BS would stop, but the trend continues.
Fine, it doesn't have to look the best on PC, but don't actually require more processing power for no reason.
They are… out?
I think this is the first generation of mainstream cards that are actually targeting reasonable 1440p performance or high frame rate 1080p. There are many gamers out there right now who are moving into those segments, and it makes sense to target them with an affordable card.
Yes, but then the question becomes, “what is affordable?”
With this slow creep of increased cost, generation by generation, when will these cards no longer be “affordable?”
I’m going to pull two quotes from the article, and then explain my thoughts.
“the GTX 1060 6GB, which launched at $249 ($299 for the Founders Edition) way back in July 2016”
“$279 might be seen as the inflation-adjusted price for such a card after nearly two years, but that sort of talk will not win me any friends in the lovely world of internet comments”
We are seeing a card at approximately the same street price as the GTX 1060 was 2.5 years ago at launch, and it gives ~50% more performance. As I said before, this level of performance is getting into solid 1440p and high-FPS 1080p range, which is awesome for gamers. This is a great deal, no matter how you slice it. Yes, it would be better if it was cheaper; that can be said for every product, but in the real world inflation is a thing, and nVidia is not immune to its effects. The other thing that can't be missed is that launch prices almost always drop, and we will probably see these cards hit $249 sometime this year.
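For what it's worth, here is a rough sketch of that inflation math; the 2% annual rate is just an assumption for illustration, not a figure from the article:

```python
# Rough sketch of the inflation adjustment alluded to above.
# The 2% annual rate is an assumption for illustration, not a figure from the article.
launch_price = 249      # GTX 1060 6GB launch MSRP, July 2016
annual_rate = 0.02      # assumed average annual inflation
years = 2.5             # July 2016 to February 2019

adjusted = launch_price * (1 + annual_rate) ** years
print(f"Inflation-adjusted launch price: ${adjusted:.0f}")  # roughly $262 at this assumed rate
```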
IMO, I look at it as trading blows with the 1070/1070 Ti for $100 less than the 1070 launched at. The 1660 Ti also has a lower TDP, so it's easier to cool and cheaper to power. Seems like a great value to me.
The performance/price ratio looks great, but in a time of recession, despite what economists say, cash is king and every cent is worth sparing.
As usual, NV$ tries to deplete gamers' pockets with a higher price tag!
I'm still waiting to see any GTX 1660 under 249 USD… as a true GTX 1060 replacement.
Nice review Sebastian, could you please add a power consumption chart?
Thanks
Yes! Need to add that. I’d started power testing at 2560×1440 and didn’t want to mix with older 1080p/ultra results since 1440 has higher draw. Will have that up by this evening as I get through the other cards. Ran out of time last night (aka passed out at around 2am).
Any support for variable refresh rate / FreeSync?
Great question. It should, since the requirement is Pascal and up, but I'll test with a FreeSync monitor and the latest driver today.
The NoRTX Turing raster gaming card has arrived, and RX 590 prices will have to fall along with the Vega 56. The Vega 56 results are conspicuously absent from the benchmarking comparison charts, and the Vega 56 price is reported to have been strategically lowered to $279 by retailers(1).
So there need to be some updates to the benchmarking to include the RX Vega 56's stock and overclocked results, because the price delta between the GTX 1660 Ti and Vega 56 amounts to $0.
What AMD also needs to begin offering is some lower-binned Vega 20 die based variant, binned lower than the Radeon VII. Vega on a 7nm process node can still offer more competition against Nvidia for raster-only gaming at the higher end.
Any price/performance comparison charts going forward need to reflect the current Vega selling prices and not MSRP.
(1)
“AMD Partners Cut Pricing of Radeon RX Vega 56 to Preempt GeForce GTX 1660 Ti”
https://www.techpowerup.com/252900/amd-partners-cut-pricing-of-radeon-rx-vega-56-to-preempt-geforce-gtx-1660-ti
“…..There is nothing fused off here or disabled in software with TU116….”
See, for me I was SURE this was going to be a "binning" part, but I'm truly surprised they actively designed/manufactured/distributed this part as part of a planned-out roadmap. Very interesting.
Just because the TU116 base die tapeout's top-binned part is the GTX 1660 Ti does not mean that there cannot be a lower-binned variant derived from some defective TU116 die samples. And, as always, as a result of the imperfect diffusion process there will be defective dies no matter the base die tapeout that is used.
Nvidia will die-harvest any TU116 parts for some lower price segment, for TU116-based GTX 1600 series non-Ti/lower variants like GTX 1650/1630 parts that may be used for OEM PCs and such.
Nvidia is smart to release a line of raster-gaming-only focused GTX 1600 series parts for that market segment, which can still rely on gaming titles and graphics APIs (Vulkan, DX12/DXR, and such) having alternative code paths that make some ray tracing/AI features available via a shader-core-accelerated software path for all non-RTX-branded Nvidia, AMD, and Intel graphics SKUs.
So you can be damn sure that if Intel/AMD do not have any RTX/AI-like GPU IP in their hardware, they will work with both Microsoft and the Khronos Group (Vulkan) to make sure there are alternative code paths for ray tracing and AI-accelerated workloads done on the GPU's shader cores for any GPU SKUs that lack dedicated ray tracing and tensor core hardware. Nvidia will as well, just to continue selling Pascal and non-RTX Turing offerings that can still make use of any alternative graphics/compute API code paths for GPU SKUs that are not RTX enabled.
Think about this! Intel has acquired the FPGA maker Altera, so Intel could very well program an FPGA to do the ray tracing/bounding volume hierarchy (BVH) calculations, as well as FPGA-implemented tensor cores. Ditto for AMD working in partnership with Xilinx on some of the very same functionality. Xilinx and AMD have already worked on some Epyc-based HPC platforms that are paired with Alveo U250 accelerator cards(1).
AMD, via Microsoft's and Sony's deep pockets, could make use of some Xilinx FPGA-related IP integrated alongside next-generation AMD console APUs. AMD and Xilinx FPGA IP could be interfaced via AMD's Infinity Fabric (xGMI), and that could all be done via AMD's EESC division working in partnership with Microsoft and/or Sony and Xilinx/others, with FPGAs programmed to do what AMD's console APU hardware currently cannot, except on the slower shader core/software code path route.
I really wish that AMD and Xilinx would cozy up even more to better compete with Intel's massive IP portfolio, which is going to include discrete GPUs in the 2020/later time frame. AMD's advantage of having both GPUs and x86 CPUs under its IP umbrella is going to come to an end after 2020. Then there will be Intel with CPUs, discrete GPUs/integrated graphics, and FPGAs/memory/other PC technology IP all under one umbrella.
Nvidia's CEO has got to be having some extra sleepless nights pondering Intel's discrete GPU market entry, what with Intel having the FPGA IP to include with its discrete GPUs, programmed for tasks like ray tracing/BVH and matrix math (tensor cores), before Raja's/Intel's teams can get a more ASIC-like answer to Nvidia's RTX IP.
Just remember that AMD's ProRender software/plugins support simultaneous CPU core/GPU shader core ray tracing acceleration, and that may be what the Khronos Group uses for Vulkan, with that code base modified to work inside Vulkan's API via some AMD extensions, or most likely some cross-platform inclusion in the non-vendor-specific section of the Vulkan graphics API standard. Some Vulkan extensions that were once made for a single maker's GPU hardware get adopted into the full cross-platform Vulkan specification once the rest of the market begins making use of them. And that adoption can be GPU-hardware based or software-code-path based.
(1)
“30,000 Images/Second: Xilinx and AMD Claim AI Inferencing Record”
https://www.hpcwire.com/2018/10/03/30000-images-second-xilinx-and-amd-claim-ai-inferencing-record/
It would be interesting to know nVidia's gross margin on these, because the die size of the 1660 Ti is quite a bit bigger than the 1060's despite the smaller process (16nm to 12nm). GDDR6 is also more expensive than the GDDR5 the 1060 launched with, yet the price is largely the same. At face value it seems like nVidia is accepting lower margins than the 1060 commanded, which is uncharacteristic of them.
Look at all that TU116/Turing/GTX has: improved shader core/SM/cache/other tweaks, and extra shader cores at that. Turing has a different shader core to SM ratio than GP106/Pascal. The cache subsystems on Turing are improved over Pascal's. There is so much more new IP in Turing even with the RTX IP excluded, and really, how hard is it to bring up TechPowerUp's GPU database in two browser tabs, one for the GTX 1060 and one for the GTX 1660 Ti? Turing, even without the RTX IP, is still going to be larger because of the transistor count needed to enable all the improvements to Turing's micro-arch. And did I say more shader cores!
This is the mainstream GPU market where, unlike the flagship GPU market, sales volume is where more revenues are to be had. It's not unrealistic, because Nvidia knows that raster-oriented gaming titles will continue to rule the roost for a few more years, and AMD, and soon Intel, will be offering some stiff competition: AMD come Navi time, and after that whatever Intel will be offering.
And thus Nvidia, which has only GPUs and not much else producing the lion's share of its revenues, will really have mainstream GPU competition from two other players and not just one. Nvidia has to retain its GTX raster gaming line for that mainstream market (dominated by raster-oriented gaming titles currently and for some time to come) in addition to its more costly, high-end RTX-branded line of products that Nvidia is hoping (betting billions on) will become a newer gaming standard for games that make use of ray tracing and AI in the GPU hardware.
RTX (ray tracing/AI denoising and AI AA/upscaling) is going to be a do-or-die thing for Nvidia, because that's what Nvidia has and will use to differentiate its GPU offerings until AMD and Intel have time to catch up. Nvidia will spend billions to get as many gaming titles as possible RTX-enabled over the next few years. So GTX/Turing is a stopgap measure for Nvidia to retain mainstream GPU market share that's currently based on raster performance and not any RTX performance just yet, or even within the next year or possibly two.
Nvidia has only GPUs for its bread and butter, unlike AMD (CPUs, GPUs) currently, or Intel (CPUs, incoming discrete GPUs, and Optane/memory/TB3) come 2020. I'd expect to see more lower-binned GTX/Turing (TU116) SKUs arriving as the Pascal variants' supply channels run dry.
Here are the GTX 1060 specs:
Shading Units: 1280 TMUs: 80 ROPs: 48 SM Count: 10
GPU Name: GP106
GPU Variant: GP106-400-A1
Architecture: Pascal
Foundry: TSMC
Process Size: 16nm
Transistors: 4,400 million
Die Size: 200 mm²
Here are the GTX 1660Ti specs:
Shading Units: 1536 TMUs: 96 ROPs: 48 SM Count: 24
GPU Name: TU116
GPU Variant: TU116-400-A1
Architecture: Turing
Foundry: TSMC
Process Size: 12nm
Transistors: 6,600 million
Die Size: 284 mm²
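For illustration, those two spec blocks imply only a modest density improvement from 16nm to 12nm, which sketches why the extra logic pushes the die size up; the values below are computed directly from the figures listed above:

```python
# Transistor density derived from the two spec blocks above (rounded figures).
gp106 = {"transistors_m": 4400, "die_mm2": 200}   # GTX 1060 (Pascal, 16nm)
tu116 = {"transistors_m": 6600, "die_mm2": 284}   # GTX 1660 Ti (Turing, 12nm)

for name, chip in (("GP106", gp106), ("TU116", tu116)):
    density = chip["transistors_m"] / chip["die_mm2"]  # million transistors per mm^2
    print(f"{name}: {density:.1f} MTr/mm^2")
# GP106 ~22.0 MTr/mm^2 vs TU116 ~23.2 MTr/mm^2: 12nm is only a modest density
# bump over 16nm, so the extra Turing logic shows up directly as a larger die.
```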
The more curious thing for me is whether there will be a mobile version of this card. It looks very intriguing in a laptop.
The prices in the table are plainly wrong, mixing Founders Edition with partner models.
How many stock options did nVidia give for this review?
I wish you would add fan noise measurements. I’m explicitly getting a 1660ti because of the low Watts, and therefore hopefully quiet fan noise.
Oh boohoo! Why can’t Nvidia make a GPU that can run 3 4K screens at 120hz and then give it away for free? They’re rich, they can afford it. But they won’t, and you know why? Because they are EVIL and GREEDY!
Wait a minute… I understood that Turing and ray tracing were only going to be available on the 2080 series…
When did NVIDIA change that stance from as little as 6 months ago? I use NVIDIA cards for creating content with Octane Render, which is why I ask… that, and the 2080s and the Ti being well overpriced this series, IMO.