Digitimes is reporting on statements allegedly made by TSMC co-CEO Mark Liu. We are currently seeing 16nm parts come out of the foundry, which are expected to be used in the next generation of GPUs, replacing the long-running 28nm node that launched with the GeForce GTX 680. (It's still unannounced whether AMD and NVIDIA will use 14nm FinFET from Samsung or GlobalFoundries, or 16nm FinFET from TSMC.)
Update (Jan 20th, @4pm EST): A couple of minor corrections. The Radeon HD 7970 launched on 28nm first, by a couple of months. I just remember NVIDIA getting swamped in delays because it was a new node, so that's probably why I thought of the GTX 680. Also, AMD announced during CES that they will use GlobalFoundries to fab their upcoming GPUs, which I apparently missed. We suspect that NVIDIA will use TSMC, and have assumed that for a while, but it hasn't been officially announced yet (if ever).
According to their projections, which (again) are filtered through Digitimes, the foundry expects to have 7nm in the first half of 2018. They also expect to introduce extreme ultraviolet (EUV) lithography with the 5nm node in 2020. Given that solid silicon has a lattice spacing of ~0.543nm at room temperature, a 7nm feature would span only about 13 atoms, and a 5nm feature about 9.
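For a rough sense of the arithmetic (a back-of-the-envelope sketch; node names don't map cleanly onto physical feature sizes, so treat these as order-of-magnitude figures):

```python
# Back-of-the-envelope: how many silicon lattice spacings a feature
# of a given size spans, using Si's room-temperature lattice
# constant (~0.543 nm).
SI_LATTICE_NM = 0.543

for node_nm in (16, 10, 7, 5):
    print(f"{node_nm} nm spans ~{node_nm / SI_LATTICE_NM:.0f} atoms")

# 7 nm -> ~13 atoms across; 5 nm -> ~9 atoms across
```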
We continue the march toward the end of silicon lithography.
Even if the statement is accurate, much can happen between now and then. It wouldn't be the first time I've seen a major foundry believe a node would be available on schedule, only to have it delayed. I wouldn't hold my breath, but I might cross my fingers if my hands were free.
At the very least, we can assume that TSMC's roadmap is 16nm, 10nm, 7nm, and then 5nm.
The number of people predicting the end to Moore’s Law doubles roughly every 18 months.
I'm waiting for us to shift to a new compute medium…
Does time count as a medium?
If we can just miniaturize time travel technology, get the electrons to jump back in time, dump them into the input registers for the next stage in the pipeline, and preload that stage with the proper micro-operation (solving the conditional branching problem with the usual speculative computation is an implementation detail; my money is on Transmeta, they were always clever) – BOOM – infinite compute.
That’s how that would work, right?
That's all folks! We are running out of atoms per circuit with which to make transistors smaller in the planar dimension; at single-digit atom counts, funny quantum things begin to happen that were not as apparent when circuits were made up of more atoms. And even though the smallest circuits may offer power savings, the number of substrate atoms surrounding each circuit will have to remain large enough to provide heat transfer and physical support. So the space savings from packing more circuits per unit area will not be as great, because there must be enough substrate atoms to support a 5nm circuit's heat transfer and physical structure. The circuit size will get smaller, but the pitch/spacing between circuits will have to remain large relative to the 5nm feature size, or the circuits will evaporate themselves for lack of heat transfer capability.
Lack of proper heat transfer will lead to more inefficiencies, magnified as circuits get smaller, if heat energy is allowed to interfere with a circuit's switching ability, and more costly cooling solutions will have to be employed to keep 5nm circuits operating error-free. At 14nm, just about all of the low-hanging fruit for circuit shrinking on planar silicon-based processes has been picked, and other substrate materials that can withstand higher operating temperatures will have to be utilized. The only way to go now is stacking, and stacking has its own special heat problems that have to be solved for non-memory processing circuits. The fins in FinFET circuits will have to get taller, or designs will have to go gate-all-around; but when widths get down to single-digit numbers of atoms, the entire transistor will have to be designed to cope with the quantum nature of small numbers of atoms. Things can become literally neither here nor there with respect to whether an electron is held back by the gate or has tunneled through, which is even harder to prevent with all that thermal shaking going on.
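To make the heat argument concrete, here is a toy sketch (hypothetical numbers, idealized area scaling, leakage ignored) of why power density climbs when supply voltage stops scaling down with feature size:

```python
# Toy illustration of post-Dennard scaling: if voltage no longer
# shrinks with feature size, fitting the same logic into a smaller
# area raises power density (W/mm^2), i.e. the heat problem above.
def power_density(power_w, area_mm2):
    """Watts per square millimetre for a block of logic."""
    return power_w / area_mm2

# Hypothetical block: 10 W in 100 mm^2 at the old node.
old = power_density(10, 100)

# Ideal 0.5x linear shrink -> 0.25x area; assume power stays ~10 W
# because supply voltage has stopped scaling.
new = power_density(10, 100 * 0.25)

print(f"old: {old:.2f} W/mm^2, new: {new:.2f} W/mm^2")  # 0.10 vs 0.40
```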
http://spectrum.ieee.org/semiconductors/devices/the-status-of-moores-law-its-complicated
http://www.hpcwire.com/2016/01/11/moores-law-not-dead-and-intels-use-of-hpc-to-keep-it-that-way/
Even Intel had problems with its 14nm process; just wait for the 5nm process and the problems associated with EUV lithography. Moore's law/observation has more to do with the economics of going smaller than the physics of going smaller, so expect any high-powered 5nm process for CPU cores to have even more problems, and these problems are going to be even more costly to fix, including having to lower clock speeds or circuit densities relative to the smallest circuit size! Moore's law/observation will have to give up on the planar dimension and go with 3D circuit stacking or die stacking, with more circuits running at lower clock speeds just to manage all the heat generated. Heat transfer and leakage are going to make things harder for any stacked 3D circuits made on a 5nm process.
The first article you linked to is from 2013, from before even the 14nm problems were revealed. The second article clearly states:
“Gordon Moore’s 1965 article on the economics driving the increase of semiconductor functionality has turned out to be wildly prophetic in terms of the effect on transistor scaling”
So it's the costs that will make Moore's law hit the economic wall of going smaller on a planar process before the laws of physics make going smaller in the planar dimension impossible. Chip stacking and gate-all-around will probably be the only ways to increase the number of circuits on a die, but that will be done by going up into the Z axis for circuit design, or by simply stacking dies to allow more circuits per unit volume, with doing things by square (planar) area retired for good. This has already started with 3D NAND and HBM die stacking, and is just beginning for processor die stacking, with more complicated problems for processor die stacking still to be solved.
I agree. I just thought those were interesting reads along these lines.
Especially how the node name means very little.
Going smaller currently is definitely a mixed bag too. The overall system architecture is currently the bottleneck.
Exascale computers are going to change the architecture of supercomputers significantly. The technologies used there (3D memory, silicon photonics, better interconnects, XPoint, etc.) will eventually make their way into smaller systems like PCs, and they'd make a lot more of a difference than smaller transistors.
“… (It’s still unannounced whether AMD and NVIDIA will use 14nm FinFET from Samsung or GlobalFoundries, or 16nm FinFET from TSMC.) …”
Scott, it has been announced that AMD is going with GloFo for 14nm:
http://thefoundryfiles.com/2016/01/07/globalfoundries-to-build-new-14nm-gpus-for-amd/
And Nvidia will be using 16nm from TSMC:
http://www.digitaltrends.com/computing/tsmc-will-build-nvidias-new-gpu-on-a-16-nm-finfet-process/
GloFo’s process for 14nm is licensed from Samsung!
Ah thanks. Must have missed the AMD/GloFo news during the CES blitz. We're pretty sure that NVIDIA will use TSMC, but I still consider reports like that to be unofficial.
” replacing the long-running 28nm node that launched with the GeForce GTX 680.”
The Radeon HD 79x0 preceded the GTX 680 by a few months.
Yeah, that's also true. I just remember the 680 shortages so it was first to my mind. I'll update these.
I doubt we will actually see these small process sizes come to market, unless the names become even more marketing-driven than they already are. It is amazing that they can make so-called 14nm parts with 193nm-wavelength light. I have not heard anything good about yields on 16nm and smaller nodes. Even if they can make these devices with sufficient yield to be economical, the lifespan of these devices may be a concern: with that small a number of atoms, it wouldn't take many electrons knocking atoms out of place to cause issues.
There are other things that can be improved about a process without making the feature size smaller. A big target will be interconnect. The metal interconnect layers consume a significant portion of the power in modern, high-performance chips. I have seen some strange things being done to reduce resistance or otherwise improve the performance of the interconnect. This will be slow going to some extent, since software models need to take these new features into account.
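The interconnect problem falls out of the basic resistance formula R = ρL/A: halve a wire's width and height and its resistance quadruples for the same length. A rough sketch with bulk copper resistivity (real nanoscale wires are worse still, due to surface scattering and barrier layers, which this ignores):

```python
# R = rho * L / A for a wire with a square cross-section, using bulk
# copper resistivity and ignoring nanoscale size effects.
RHO_CU = 1.68e-8  # ohm*m, copper at room temperature

def wire_resistance(length_um, width_nm):
    """Resistance of a square-cross-section wire."""
    area_m2 = (width_nm * 1e-9) ** 2
    return RHO_CU * (length_um * 1e-6) / area_m2

for w in (64, 32, 16):
    print(f"{w} nm wide, 10 um long: {wire_resistance(10, w):.0f} ohms")
# Each halving of width quadruples resistance: ~41, ~164, ~656 ohms.
```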
Chip stacking, like silicon interposer technology, will help by allowing the use of smaller dies to counter lower yields. Multiple GPU dies can be used with high-speed connections between them. For CPUs, they can place multiple processor dies and a separate cache die alongside other components.
Those people at TSMC are increasingly detached from reality. Even Intel, which is far ahead of TSMC or any other foundry, plans to introduce 10nm only in late 2017, and even then only on small-die chips to manage low yields.
If TSMC claims they will beat Intel and have 7nm products on the market only about half a year after Intel pushes out its first 10nm products, they are either insane or liars.