The rumor mill is churning out additional information on the alleged NVIDIA GTX 1660 Ti graphics card as it gets closer to its purported release date later this month. Based on the same Turing architecture as the already launched RTX series (RTX 2080, RTX 2070, RTX 2060), the GTX 1660 Ti will reportedly use a smaller TU116 GPU (specifically TU116-400-A1) and 6GB of GDDR6 memory on a 192-bit memory bus. Spotted by VideoCardz, TU116 appears to be pin compatible with TU106 (the GPU used in the RTX 2060), but the die itself is noticeably smaller, suggesting that TU116 is a new GPU rather than a cut-down TU106 with hardware purposefully disabled or binned down due to manufacturing defects.
A bare MSI GTX 1660 Ti Ventus XS graphics card, courtesy of VideoCardz.
Rumor has it that the GTX 1660 Ti will feature 1536 CUDA cores, 96 texture units, and an unknown number of ROPs (possibly 48, as the 192-bit memory bus matches the RTX 2060's). Clock speeds will start at 1500 MHz and boost to 1770 MHz, and the 6GB of GDDR6 will be clocked at 6000 MHz. VideoCardz showed off an alleged MSI GTX 1660 Ti graphics card with the cooler removed, revealing the PCB and components. Interestingly, the PCB has six memory chips on board for the 6GB of GDDR6, with spots and traces for two more chips. Don't get your hopes up for an 8GB card, however; it appears that NVIDIA is simply making things easier on AIB partners by using pin-compatible GPUs, allowing them to reuse boards designed for higher-end graphics card models for the GTX 1660 Ti. The board number for the GTX 1660 Ti is PG161, which is similar to the board used with the RTX 2060 (PG160).
Enthusiasts' favorite Twitter leaker TUM_APISAK further stirs the rumor pot with a leaked screenshot showing the benchmark results of a GTX 1660 Ti graphics card in Final Fantasy XV at the 1440p High Quality preset. The GTX 1660 Ti allegedly scored 5,000 points, putting it just above the GTX 1070 at 4,955 points and just under the GTX 980 Ti's 5,052. On the competition's side, the GTX 1660 Ti appears to sit between a presumably overclocked RX Vega (4,876) and a Radeon Vega II (5,283).
@TUM_APISAK shows off an FF:XV benchmark run including results from an unspecified GTX 1660 Ti graphics card.
Other performance rumors suggest that the GTX 1660 Ti will offer up 5.44 TFLOPS. RT cores are apparently cut (or disabled) in this GPU, but it is not clear whether the Tensor cores are intact (rumors seem to say yes, though).
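For what it's worth, that 5.44 TFLOPS figure falls straight out of the usual FP32 throughput formula, with each CUDA core retiring one fused multiply-add (two floating-point ops) per clock. A quick back-of-the-envelope check in Python, using the rumored (not confirmed) core count and boost clock:

    # FP32 throughput = CUDA cores x boost clock x 2 ops/clock (one FMA = 2 FLOPs)
    cuda_cores = 1536              # rumored TU116 shader count
    boost_ghz = 1.770              # rumored boost clock
    tflops = cuda_cores * boost_ghz * 2 / 1000
    print(f"{tflops:.2f} TFLOPS")  # -> 5.44 TFLOPS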
Nvidia GTX 1660 Ti graphics cards based on the TU116 GPU will reportedly start at $279 [update: VideoCardz claims the pricing has been confirmed via information given to reviewers] and may launch as soon as February 22nd (though they've already missed one rumored launch date on the 15th…). Assuming for a minute that the performance figures are true, it is interesting to see the smaller TU116 GPU with fewer CUDA cores at least getting close to GTX 1070 performance. The GTX 1070 uses the 16nm GP104 GPU (7.2B transistors) with 1920 CUDA cores (1506 MHz), 120 texture units, 64 ROPs, and 8GB of memory on a 256-bit bus clocked at 8000 MHz, offering up to 5.7 TFLOPS. Looking at the progress over the past few generations, it is neat to see that as architectures improve, they are able to do more work with fewer (but better/faster) CUDA cores. I would guess that the GTX 1660 Ti will not best the GTX 1070 in all games and situations, though, as the GTX 1070 does have more ROPs and more total memory (though the GDDR6 on the GTX 1660 Ti does offer more bandwidth than the 1070's GDDR5 despite the smaller bus).

Pricing will be interesting in this regard, as the rumored price starts at $279 for the GTX 1660 Ti. The cheapest GTX 1070 I found online at the time of publication was $300, with most cards going for closer to $330+, so we may see price drops on the older GTX 1070 cards as a result. GTX 1060 cards are going for $200+, RX 580 cards sit at $190+, RX 590 at $260+, and Vega 56 prices start at $330 (and go crazy high, heh), so the GTX 1660 Ti may also push down the prices of the higher-end and higher-priced models of those cards as well.
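That bandwidth claim is easy to check with a rough sketch. This assumes the rumored "6000 MHz" GDDR6 corresponds to a 12 Gbps effective per-pin data rate (the usual doubling convention in these leaks) and the GTX 1070's stock 8 Gbps GDDR5; neither rate is confirmed in the source material:

    # Peak memory bandwidth = (bus width in bits / 8) x effective data rate per pin
    def bandwidth_gb_s(bus_width_bits, rate_gbps):
        return bus_width_bits / 8 * rate_gbps

    print(bandwidth_gb_s(192, 12))  # GTX 1660 Ti (rumored): 288.0 GB/s
    print(bandwidth_gb_s(256, 8))   # GTX 1070:              256.0 GB/s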
What are your thoughts on the latest rumors?
“Spotted by VideoCardz, TU116 appears to be pin compatible with TU106 (the GPU used in the RTX 2060), but the die itself is noticeably smaller, suggesting that TU116 is a new GPU rather than a cut-down TU106 with hardware purposefully disabled or binned down due to manufacturing defects.”
If it’s physically smaller and Nvidia has given it the new base die tapeout designation “TU116”, then it’s definitely a brand new base die tapeout using shader cores of the Turing (TU) GPU microarchitecture. It’s lacking the RT cores(?), and Nvidia will have to retain the Tensor Core IP if it wishes its DLSS IP to be usable on TU116. That TU116-400-A1 designation may indicate that this is still not the full TU116 die, as the full die may have more shader cores and Tensor cores for better yields, or even for some future higher-binned variant based off of the TU116 base die tapeout.
“appears to sit between a presumably overclocked RX Vega (4,876) and a Radeon Vega II (5,283).”
RX Vega 56/64(?) and what is this “Radeon Vega II”?
I’d rather see pixel fill rates and texel rates along with any FP TFLOPS figures.
How much of the die-space reduction is due to the 12nm process node on TU116, and how much to fewer shader cores, no RT cores(?), and reduced Tensor cores?
And having no RT cores or reduced Tensor cores should not require any die/pinout adjustments, because that RTX IP is really on-core IP that performs/consumes no off-die transfers/resources directly anyway. Maybe Nvidia also has no NVLink IP at all included on TU116, and that’s saving more die space. TU116’s pinout may still be a subset of what is required for any larger Turing die, but that just means fewer active/used pins/pads/bumps on the die, while the pin/pad/bump layout remains the same for all the pins that matter. That’s just the most economical way for any GPU maker to retain pinout compatibility for itself and its AIB partners.
12nm is the same process as 16nm; only the max reticle limit is increased. So you can make larger dies, but there is no die shrink at all.
I'm pretty sure AMD's 12nm is the same thing. No shrink. They just added higher-leakage transistors for higher clocks and called it 12nm, just like Intel did with 14nm++ (but Intel didn't lie about it).
Intel doesn't have to market its fab tech to fabless customers, so they just tell the truth (but with all the screaming people are doing about 10nm, I would just go ahead and lie like everybody else did).
Even AMD's current Ryzen 2000 chips are really 20nm. The back end is supposedly 14nm (just contacts and vias, etc.) but the logic is all 20nm. But they call it 12nm.
It's because 20nm planar was no better than 28nm. That's why everybody was stuck on 28nm for so long; it wasn't worth upgrading from 28nm. Then they added FinFETs to 20nm, and it was so good that they decided they should call it 14nm. Then they did it again with 12nm and high-leakage gates. If you don't believe me, look at the die sizes of 14nm and 12nm chips from AMD: they are the same size. They said they put dark silicon between sections, but it's bull. If they had any shrink at all it was 5% tops. Maybe they found a way to pack cache cells closer, but I doubt even that.
Of course that's GloFo's 12nm, but TSMC's 12nm is a similar case, with the only real change being that they could make larger dies on it.
Never believe a fab's own marketing… or even naming, for that matter.
“14nm and 12nm chips from AMD: they are the same size”
It’s still not the exact same process, just the same BEOL metal-layer pitch between GF’s 12nm process and the GF (licensed from Samsung) 14nm process, to ease GF customers’ transition to its newer, improved 12nm node. GF also offered an option on its 12nm process of a denser 7.5T library cell size with 10 fins per cell, as opposed to the 14nm process that used 9.5T design libraries with larger 12-fin cell sizes. So clients could stay with their same 9.5T tapeout or opt for the denser 7.5T libraries for more transistors per mm².
GF also tweaked the transistor design on that 12nm node for better power/leakage and better switching-speed metrics. So yes, the die sizes may have remained the same if AMD simply chose to move directly over without changing from the less dense 9.5T libraries; or they could have opted for 7.5T libraries and packed in a few more transistors for Zen+ CPUs and APUs, for things like more robust error correction and other tweaks to the functional blocks that are on the average AMD APU, like video connection fabric IP, encoding/decoding, and other IP.
The Raven Ridge Picasso APUs are on Zen+ at GF 12nm, and there is plenty of tweaking that was done involving extra circuitry for cache subsystems, even if AMD did not add any more CPU cores or shader cores to the integrated Vega graphics on Raven Ridge/Picasso relative to first-generation Zen Raven Ridge at 14nm.
The same goes for Nvidia and TSMC’s 12nm node over TSMC’s 16nm node, as far as layout library and cell size (fins per cell) options go, as well as any individual transistor design and diffusion/doping changes at that TSMC 12nm process node used for Turing.
The Wikichip Fuse article (1) is a must-read if you want to learn how density can be affected by the use of denser design layout libraries even at close to, or the same, process node sizes. Going to denser libraries (fewer fins per cell) allows more transistors to be packed in per mm², but reduces the ability to build 4-fin transistors compared to 3- or 2-fin transistors per cell. So if you want higher clocks/more drive current, you need 4-fin transistors and lower transistor density. It’s 9.5T libraries and more fins per cell for the 4-fin transistors and higher clocks, or 7.5T libraries with fewer fins and more transistors per mm² but lower clocks, because 3- or 2-fin transistors cannot accept the higher drive currents or switching speeds.
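To put rough numbers on that 9.5T-versus-7.5T trade-off: a standard cell's height is its track count multiplied by the minimum metal routing pitch, so dropping from 9.5 to 7.5 tracks buys roughly a 1.27x density gain at the same pitch. A minimal sketch, using an illustrative pitch value rather than GF's published figure:

    # Standard-cell height = track count x metal routing pitch (illustrative values)
    metal_pitch_nm = 64              # hypothetical pitch, for illustration only
    h_95t = 9.5 * metal_pitch_nm     # 608 nm-tall cells (speed-oriented library)
    h_75t = 7.5 * metal_pitch_nm     # 480 nm-tall cells (density-oriented library)
    print(round(h_95t / h_75t, 2))   # ~1.27x more cell rows in the same area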
That BEOL (Back End Of Line) metal-pitch matching that GF did, with its 12nm node keeping the same BEOL metrics as the 14nm/Samsung node, was done to ease GF’s (Samsung-process) 14nm customers’ transition to GF’s in-house (no licensing fees to Samsung) 12nm node. And read the Wikichip Fuse article to see how GF tweaked its base transistor design at 12nm compared to the 14nm/Samsung process that GF licensed.
(1) “VLSI 2018: GlobalFoundries 12nm Leading-Performance, 12LP”
https://fuse.wikichip.org/news/1497/vlsi-2018-globalfoundries-12nm-leading-performance-12lp/
Be cheap and Turing will go from lame to nice
This is a pointless, overpriced card. If it only reaches GTX 1070 performance but has less RAM, then why bother? It would be better to grab a 1070 with 8 GB of RAM for $30 more.
Seems like everyone is forgetting the 1060 came out on July 16, 2016, two and a half years ago, for $250. So that's a 33% performance jump 2.5 years later at a 12% premium.
This is an obvious cash grab. The article even points out that they can do more with less. They build for cheaper and you pay more. Completely unacceptable.
If you need a video card, hit up eBay, as the cryptodreamers are selling at discounted rates.
An item that was purchased in 2016 for $250 would cost $262.19 in 2019, factoring in the cumulative inflation rate of 4.9%.
So that’s $279 minus $262 (rounded down), which equals $17 more after inflation. And that’s the overall inflation rate, not the rate of wage increases for the GPU, software, and driver engineers, or the specific materials/fabrication costs that may have risen faster than the overall inflation rate over those years.
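The arithmetic checks out; a minimal sketch using the commenter's own ~4.9% cumulative figure:

    # $250 in 2016 dollars grown by ~4.9% cumulative inflation to 2019
    adjusted = 250 * 1.049
    print(round(adjusted, 2))      # 262.25, close to the $262.19 cited above
    print(279 - round(adjusted))   # ~$17 real-terms premium over the 2016 GTX 1060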
It’s a 12nm process for Turing and the die size is smaller, so Nvidia is getting more dies per wafer with TU116.
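The dies-per-wafer point can be made concrete with the classic approximation below. The die areas are illustrative ballpark values for a smaller TU116-class die versus a larger TU106-class die, not figures from this article:

    import math

    # Dies-per-wafer approximation for a 300 mm wafer (ignores yield and scribe lines)
    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        r = wafer_diameter_mm / 2
        return int(math.pi * r**2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    print(dies_per_wafer(285))  # smaller TU116-class die: ~208 die candidates
    print(dies_per_wafer(445))  # larger TU106-class die:  ~127 die candidates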
So that’s an MSRP of $279 (GPU die and all the other VRMs and other thingies), and engineering wages have been going up faster than the rate of inflation compared to other workers due to the economy/labor market.
I’ll bet that Nvidia will not keep Intel/Apple from poaching engineering talent of all sorts if those GPU software/hardware engineers are not compensated more handsomely, what with Intel’s and Apple’s deep pockets looking for that specific GPU talent (CPU talent also). Just look at AMD, with Raja and other AMD folks now working at Intel.
Add to that the tariffs and that trade war, and maybe that $279 is not looking so bad. And really, Nvidia is selling GPU dies and not much else; even on the cards that Nvidia sells directly, Nvidia is using an AIB/subcontractor for its in-house GPU card SKUs.
12nm is the same process as 16nm; it's just got a larger reticle limit for larger dies. But otherwise I'm kinda with you. With no process shrink to fuel cheaper, denser cards, this isn't a horrible deal. When 7nm hits we'll see something better, I hope.
I’m afraid that significantly reducing the transistor footprint below 20 nm is no longer relevant for silicon when it comes to improving power efficiency or the BOM.
The semiconductor industry has reached a dead end, and engineers didn’t envision the future for the next 10 years.
The semiconductor industry is dead, with almost 80% of its old engineers ready to retire.
Wow, you are in your usual uneducated form with that wording, chipman of the unwashed masses.
“80% of old engineers ready”: that’s stating the obvious, as any worker that is old contemplates retirement. But I’m sure that most older engineering workers would rather continue working and are instead forced into retirement. Now, blue-collar workers, well, that’s more related to aging and the ability to do demanding physical labor.
Moore’s law was an economic observation more than it was about the laws of physics making continued process node shrinks more difficult. The cost of those transistors is going up now, not decreasing, because in order to get any smaller, chip fab costs have more than doubled; it’s even worse for the new EUV machines’ costs.
So that easy transistor doubling every 18-24 months is gone if one is using a single monolithic die, requiring ever-smaller process nodes and fabs that cost ever more billions of dollars to build and equip.
Chiplet-based processor designs are a solution, and many more designs will be going to chiplets, but there are networking issues to solve if chiplet counts become too high and tax the interconnect fabric topology IP that is still not fully developed currently. 3D die stacking and other IP implies that the chip industry is not so dead yet, compared to chipman’s single cell of grey (that’s much closer to shorting out) as it tries to grasp reality.
Misaligned ButterDonut topologies and other engineering efforts for EMIB and other die-stacking interposer IP prove that the engineers are still very engaged in looking at the future.
It’s more that some larger interests’ monopoly-market tactics, made use of by the chip companies’ non-engineering management and the like, have been holding back progress, keeping that new IP on the shelves and not in any products in order to milk the current technology over a longer time period for profits rather than progress.
Enforced fair-market competition will fix that, but chipman would blow a fuse if the fair-market competition regulations already on the books started to be properly enforced. Monetary might makes right in chipman’s view of the world, more so than that fair-markets/fair-competition-drives-innovation reality stuff that chipman so despises.
“Now, blue-collar workers, well, that’s more related to aging and the ability to do demanding physical labor.”
No, retirement is not only a question of strength for blue collars but also of brain capacity, especially for white collars!
“80% of old engineers” are still made up of 100% old people that are engineers (that are OLD!), and your comment on “brain capacity” has me ROFLOL when taken in the context of your statements.
You are so daft, chipman, that you cannot even spot your own gaffes after they have been circled with a red pen!
Here is some proper research: the Semiconductor Industry Association (SIA) report, “83 Fed. Reg. 32842 (July 16, 2018), Submitted August 15, 2018,” states:
“Another challenge is the ‘greying’ of the workforce. One SIA member, in comparing their workforce statistics across the globe, identified that their U.S. workforce has an average age of 48-50, while sites in the Asia Pacific region have significantly younger workforces in their late 20s and early 30s on average. Figure 3 shows the older age distribution of workers in the Electronic Component and Product Manufacturing category of the U.S. Bureau of Labor Statistics data, a broad category that includes the semiconductor industry (orange hues), compared to the total workforce of the U.S. (blue hues), for the years 2011-2017. As the overall workforce has skewed to a slightly younger age distribution over the past six years, the semiconductor sector has seen a nearly 10 percent decrease in the share of workers ages 35-44 and significant increases in older age groups. Accordingly, as older workers begin to end their careers, the semiconductor industry in the U.S. faces the challenge of attracting and retaining younger workers with the necessary skills.” (1)
So in the US the engineering workforce is getting older on average, while in the Asia Pacific region it’s late 20s and early 30s on average. But really, the older engineers are more valuable, and the real problem in the US is not enough US folks wanting to pursue STEM training, and specifically semiconductor engineering training. So currently it’s the 35-44 age group of semiconductor engineers in the US that is shrinking the fastest, and as they reach the 45-and-older range there are not enough newly trained younger semiconductor engineers to fill the vacuum!
The real problem is that if the US cannot get the trained semiconductor engineering talent here, then it has to modify its immigration policy with respect to semiconductor engineering talent!
So there is this statement from the same report:
“The highest priority change for the U.S. federal government is to stimulate the supply of qualified workers for the semiconductor industry in the near term by swiftly reforming our high-skilled immigration system to allow STEM graduates of U.S. institutions to remain in and work in the U.S. The best-and-brightest students from around the world are attracted to our world-class universities, but once they have their diplomas, current U.S. immigration policy makes it almost impossible for these educated professionals to work, live, and contribute to the American economy. There is bipartisan support for reforming current green card policies for highly skilled immigrants, and strong government leadership is needed to make progress on this issue. The government should act swiftly to end per-country green card caps and exempt advanced STEM degree graduates of U.S. universities from existing green card caps.

From an immigration perspective, one way to increase the number of U.S. workers is to accelerate the permanent residency process for those that qualify for highly skilled immigrant visa categories (National Interest, Extraordinary Ability, Outstanding Researchers, etc.) through targeted immigration reforms such as eliminating the per-country limit on immigrant visas coupled with recapturing unused immigrant visas from prior fiscal years. By speeding up the transition to permanent residency, highly skilled workers can switch from limited-mobility work visas, such as the H-1B, and enter the unrestricted labor market as US workers.” (1)
And there you have it, stated in that report: the US is so full of Jonny Chipmans without the skill sets or the motivation to want to learn any STEM-related subjects in college. But really, the US has always had to import the best brains from around the world, and most of the engineering/science students in US grad/postgrad programs are not US citizens. And the report states that fact. It’s all in that report that’s been published in 83 Fed. Reg. 32842.
Hey chipman, what does this “83 Fed. Reg. 32842” in that report’s title refer to? And what kind of content gets presented to what body/bodies? Everything that gets before that body/bodies, and that body’s/bodies’ overseen agencies, has to be published in that “Fed. Reg.”! Come on, chipman the Daft, think about it, think about it! And where the hell are your research skills? Oh brother, chipman, you have 0% inductive/deductive reasoning skills.
(1) Semiconductor Industry Association (SIA), “Comments to the National Institute of Standards and Technology on ‘Current and Future Workforce Needs to Support a Strong Domestic Semiconductor Industry’, 83 Fed. Reg. 32842 (July 16, 2018), Submitted August 15, 2018”
https://www.semiconductors.org/wp-content/uploads/2018/11/NIST-workforce-RFI-august-2018.pdf
The formatting was fine when I submitted it, so what’s up with that? This happens when trying to cut and paste from a PDF; I fix the botched formatting, but it still gets screwed up again.
Maybe enable a right-click “Paste Unformatted” option in the textbox class on your Drupal-CMS-based website’s forum comment section! Or maybe that’s the Drupal folks’ responsibility, as they are obviously overriding and/or modifying the textbox base class’s inherited methods with some of their own Drupal overrides.
Man, that Drupal and browser-to-MS base class code base is so full of bugs on Windows (all versions)! Where did all those MS QA/QC folks go once Satya (the Cloud) Nadella took the reins from that stage-stomping, profusely sweating ape? There are no competent QA/QC folks to test the Developers’ Developers’ Developers’ Developers’ Developers’ crappy coding!
“The real problem is that if the US cannot get the trained semiconductor engineering talent here, then it has to modify its immigration policy with respect to semiconductor engineering talent!”
That’s total leftist BS!
The US needs to kick the golden asses of old university professors to improve the education quality of its young American engineers, instead of importing sub-engineers from Asia, which leads to technology theft, especially from fake students who are really spies.
It’s not that simple, if you take the time to actually read and learn! So see the above reply to your other post, and take the time to read Wikichip.org’s Fuse articles, as that is what Wikichip.org is all about, that and their online CPU/GPU and related IP documentation efforts. Read the proper academic and professional trade journals also, and see what it actually involves.
Folks that do not read and learn about high technology cannot even expect to be able to discuss any high-technology-related subject matter such as GPUs, CPUs, and fab process nodes! And process node technology is a field all unto itself that requires even more reading than usual.
You need to learn about FEOL and BEOL, diffusion/doping regimes, transistor geometry, and gate design; FinFETs and all of what came before, as well as what is incoming, like Gate All Around and other research into what is going to replace the current IP; and automated layout libraries, fin pitch, gate pitch, and the 7.5T and 9.5T (and any other #.#T) options/metrics and the trade-offs that are made for speed as opposed to density. That includes AMD’s definition of TDP and others’ definitions of TDP heat-dissipation metrics, as opposed to power usage metrics.
Also, the GPU shader core’s pipeline depth is a factor in how high a GPU’s shader cores can be clocked; ditto for a CPU core’s execution resources. Deeper pipelines enable higher clocks but are a pain if incorrect branch predictions are made and the pipeline’s contents have to be discarded. Longer pipelines take more cycles to fill, and if they have to be flushed they cost more in wasted cycles.
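That flush cost is easy to model to first order: effective cycles per instruction rise with pipeline depth times the misprediction rate. A toy sketch with purely illustrative numbers, not figures for any specific GPU or CPU:

    # Toy model: effective CPI = base CPI + branch_freq x mispredict_rate x flush penalty
    def effective_cpi(base_cpi, branch_freq, mispredict_rate, pipeline_depth):
        return base_cpi + branch_freq * mispredict_rate * pipeline_depth

    print(effective_cpi(1.0, 0.20, 0.05, 10))  # 10-stage pipeline: 1.1 CPI
    print(effective_cpi(1.0, 0.20, 0.05, 20))  # 20-stage pipeline: 1.2 CPI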
Smaller dies with denser transistor packing will get hotter per unit area than larger dies with fewer transistors per mm². And power usage is based on more factors than just clock speed.
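On that last point, the first-order CMOS dynamic power relation (P = alpha x C x V^2 x f) shows why frequency alone understates the cost: clocks usually rise together with voltage, and voltage enters squared. A minimal sketch with illustrative numbers only:

    # First-order CMOS dynamic power: P = activity x capacitance x V^2 x f
    def dynamic_power_w(activity, cap_farads, volts, freq_hz):
        return activity * cap_farads * volts**2 * freq_hz

    base  = dynamic_power_w(0.2, 1e-9, 0.90, 1.5e9)  # ~0.24 W for a modeled block
    boost = dynamic_power_w(0.2, 1e-9, 1.05, 1.8e9)  # +20% clock plus a voltage bump
    print(round(boost / base, 2))                    # ~1.63x the power for 1.2x the clock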
Still no TDP announced… which exposes the physical wall hit by the foundries.
It seems the Silly-cost Valley is pretty DEAD by now!