You've heard this one before, though not from Jen-Hsun Huang of NVIDIA, who has a vested interest in seeing Moore's Law finally relegated to computing history. NVIDIA is pushing GPUs as a better alternative to CPUs for a variety of heavy computational lifting. Volta has been adopted by many large companies, and he also just announced TensorRT 3, a programmable inference accelerator with applications in self-driving cars, robotics, and numerous other tasks previously best done with a CPU. DigiTimes quotes Jen-Hsun as saying "while number of CPU transistors has grown at an annual pace of 50%, the CPU performance has advanced by only 10%", which is more or less accurate in broad strokes but certainly not a death rattle yet.
Intel has a different opinion of course, reporting Moore's Law to be perfectly healthy just last Tuesday.
"Nvidia founder and CEO Jensen Huang has said that with the emergence of GPU computing following the decline of the CPU era, Moore's Law has come to an end, stressing that his company's GPU-centered ecosystem has won support from China's top-five AI (artificial intelligence) players."
Here is some more Tech News from around the web:
- Guru3D Rig of the Month – September 2017
- Docs ran a simulation of what would happen if really nasty malware hit a city's hospitals. RIP 🙁 @ The Register
- Deloitte is a sitting duck: Key systems with RDP open, VPN and proxy 'login details leaked' @ The Register
- watchOS 4 breathes new life into fitness side of the Apple Watch @ Ars Technica
- iPhone X vs Galaxy Note 8 specs comparison @ The Inquirer
- EWin Racing Champion Series Gaming Chair Review @ NikKTech
The number of transistors is going to stall out as well because of physical limits on transistor size. That obviously affects GPUs too. We will need to move to multi-chip GPU solutions to overcome this. Giant monolithic GPUs (like Intel's giant monolithic CPUs) are not going to be cost effective at small process nodes compared to combining multiple smaller chips.
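The yield argument behind chiplets can be sketched with the classic Poisson die-yield model, yield = exp(-D * A). The defect density and die areas below are illustrative assumptions, not any foundry's real figures:

```python
import math

# Classic Poisson die-yield model: yield = exp(-D * A), with defect
# density D (defects/cm^2) and die area A (cm^2).
def die_yield(defect_density: float, die_area_cm2: float) -> float:
    return math.exp(-defect_density * die_area_cm2)

D = 0.2  # assumed defects per cm^2 (illustrative)

# One 600 mm^2 monolithic die vs. 150 mm^2 chiplets covering the same area.
monolithic = die_yield(D, 6.0)  # one defect scraps the whole big die
chiplet = die_yield(D, 1.5)     # one defect scraps only a small chiplet

print(f"Monolithic good-die rate: {monolithic:.1%}")  # ~30%
print(f"Chiplet good-die rate:    {chiplet:.1%}")     # ~74%
```

Since bad chiplets are discarded individually and known-good dies are combined on the package, far more of each wafer ends up in sellable product, which is the economic case for multi-die designs at small nodes.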
Moore’s so-called law was more of an economic observation: the cost structure of IC manufacturing made doubling the transistor count every 18-24 months economical. That cost equation has not held for some time now, and going forward it is going to become very expensive to keep doubling transistor counts with ever-smaller process nodes as the only way to get there.
Those new nodes are going to have to stick around longer, and we will see more “half-node” steps with smaller area-density and power/leakage benefits, like GF’s 14nm-to-12nm half-node improvement, where incremental gains in density and electrical/leakage characteristics arrive over longer periods of time.
We are running out of atoms as node sizes shrink, and while EUV will help, EUV is costly, so the economics of Moore’s “Law” no longer add up. Where once more and more functionality was integrated onto a single monolithic die, things are now being made modular and spread across many smaller dies on MCMs and interposers. Die-per-wafer yields go up, so it makes more economic sense to build CPUs and GPUs from multiple smaller modular dies and scale things that way.
Things are going to have to go more 2.5D (multiple dies on MCMs/interposers) and 3D (die stacks), plus 2.5D/3D combinations (GPU dies with HBM2 stacks, or CPU dies, GPU dies, and HBM2 stacks together), so the economics of Moore’s law can be attempted once again across multi-die computing systems.
This will include more GPUs coming to market that use many smaller modular GPU dies, or even larger GPU dies in pairs on one PCIe card, wired up with Infinity Fabric/NVLink and made to appear to the OS/software as one larger logical GPU as far as load balancing across the dies is concerned.
CF/SLI will give way to DX12/Vulkan API-managed multi-GPU load balancing in the short term, while in the long term Infinity Fabric/NVLink/other IP will fully abstract any multi-die GPU configuration in the hardware/connection fabric, making modular GPU die scaling easy and transparent to software, drivers, and APIs.
The only thing holding back some of the DX12/Vulkan API-based solutions that balance load across any maker’s GPUs slotted into a platform is proprietary self-interest among some GPU and OS market players! But that self-interest cannot be maintained much longer: the newer graphics APIs put the capability in the hands of the games industry as a whole, and Vulkan is the graphics API with the larger installed base on mobile devices. It will be on PCs too if the gaming industry gets its way over OS lock-in (DirectX limited to Windows 10 and Xbox) and some GPU makers’ proprietary self-interest (CUDA lock-in, G-Sync/$$$$$).
The parrot sketch would be more apt when it comes to Intel.
How about the Black Knight from the film Monty Python and the Holy Grail?
Wikipedia has a brief rundown of the plot, or you can watch it on YouTube – I’d post links but you know how that would turn out.
Same movie, mate. Not a bad alternate though.
I see the claim about 50% transistors and 10% performance. By what metric? Transistor growth is primarily going toward more cores, and that DOES scale performance pretty much linearly, provided the application is threaded and runs its calculations in parallel.
Huang is comparing this with GPUs, which only run calculations effectively in parallel, on a much more massive scale.
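The scaling point above is Amdahl's law in miniature: cores help in proportion to how much of the work is actually parallel. A minimal sketch, with illustrative parallel fractions:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work is parallel.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# A fully threaded workload (p = 1.0) scales linearly with core count,
# but even a small serial fraction caps the benefit of adding cores.
for p in (1.0, 0.95, 0.50):
    print(f"p={p}: 8 cores -> {amdahl_speedup(p, 8):.2f}x, "
          f"1024 cores -> {amdahl_speedup(p, 1024):.2f}x")
```

At p = 0.5, even a GPU-scale 1024 "cores" tops out below a 2x speedup, which is why the CPU-vs-GPU comparison only favors GPUs on massively parallel work.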
Aren’t GPUs suffering from the same problem? I have a 4-year old GPU and still don’t feel any need to upgrade.
Moore’s law is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years.
In a March 30th, 2015 interview with Rachel Courtland, Moore said: “We won’t have the rate of progress that we’ve had over the last few decades. I think that’s inevitable with any technology; it eventually saturates out. I guess I see Moore’s law dying here in the next decade or so, but that’s not surprising.” – that doesn’t alter his original observation or add other factors to it, he only estimates that a time will come when it no longer applies.
The cost or effectiveness of those transistors, or even their shape, are not factors; only the density of ‘objects’ performing the function of a transistor.
If new shapes or materials can be crafted that are smaller yet provide an identical function to a transistor then there might be a further extension to his observation’s lifespan – the difficulty, cost, clock speed, temperature, etc. are not factors; only that the function of a transistor is provided in half the space every two years.
If they could make a molecule that functioned as a transistor we’d leap forward many years and be without foreseeable improvement permanently. Growing a crystal that functioned as an enormous memory chip might seem a means to it, but if its structure did not function like many transistors (and instead relied upon another principle) then that wouldn’t fit Moore’s Law, despite its usefulness or whatever improvement it might be over ‘transistor technology’.
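The observation as stated here reduces to a simple doubling schedule. A toy projection, with an illustrative (not real-chip) starting count:

```python
# Moore's observation as a doubling schedule: count doubles every ~2 years.
def projected_transistors(start: int, years: int, doubling_period: float = 2.0) -> float:
    return start * 2 ** (years / doubling_period)

# A 1-billion-transistor chip, a decade out: 2^5 = 32x the transistors.
print(projected_transistors(1_000_000_000, 10))  # 32 billion
```

Whether those transistors are cheap, fast, or cool to run is, as the comment notes, outside the observation itself.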
“He continued that while the number of CPU transistors has grown at an annual pace of 50%, the CPU performance has advanced by only 10%”
Translation: Moore’s Law is still in effect (i.e. transistor doubling every two years), but something that isn’t Moore’s Law isn’t in effect.
“adding that designers can hardly work out more advanced parallel instruction set architectures for CPU and therefore GPU will soon replace CPU.”
Translation: Parallelism is hard, so extreme parallelism must be easy, right guys?
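Worth noting that the quoted 50% annual figure actually compounds faster than Moore's two-year doubling, which is the point of the translation above. A quick check of the arithmetic:

```python
# Compound the quoted annual rates over one two-year Moore's Law interval.
transistor_growth = 1.50  # 50% per year, per the quote
perf_growth = 1.10        # 10% per year, per the quote

transistors_2yr = transistor_growth ** 2  # 2.25x -- a doubling or better
performance_2yr = perf_growth ** 2        # 1.21x -- nowhere near a doubling

print(f"Transistors over 2 years: {transistors_2yr:.2f}x")
print(f"Performance over 2 years: {performance_2yr:.2f}x")
```

So by Huang's own numbers, transistor counts are still doubling on schedule; it's the single-thread performance curve, not Moore's Law, that has flattened.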