Intel discreetly released a tidbit of information on a new project they are undertaking: a GPU specifically for HPC, which will compete with AMD's and NVIDIA's current offerings. We do not know much, but The Inquirer was able to ferret out that this will be a two-chip solution, pairing a GPU with an FPGA for optimization. The chips will be fabbed on a 14nm process and contain 1.542 billion transistors, significantly fewer than either AMD's or NVIDIA's current cards; what effect that will have on performance remains to be seen. Drop by to see if you can glean any more info here.
"The chip maker showcased a prototype design for an in-house graphics acceleration unit based on a 14-nanometre process at the excitingly named IEEE International Solid-State Circuits Conference in San Francisco, reported PC Watch."
Here is some more Tech News from around the web:
- Silicon qubits show promise for quantum computers @ Nanotechweb
- Chrome 64 Now Trims Messy Links When You Share Them @ Slashdot
- Iiyama reanimates LCD cartel lawsuit corpse, swings it at Samsung @ The Register
- A Problem With Jaxx Cryptocurrency Wallet Security @ Techgage
- AMD's Raven Ridge Botchy Linux Support Appears Worse With Some Motherboards/BIOS @ Phoronix
- Deconstructing A Simple Op-Amp @ Hack a Day
They better be putting some Tensor Processor Cores/units in there, either on the GPU or the FPGA, as Nvidia's Volta based Tesla SKUs have loads of Tensor Cores to work those heavy inferencing and other AI workloads. If that "GPU" lacks the usual GPU ROP/TMU/other graphics related hardware units, it would more likely be called a Vector Processing Unit or something else. But if those graphics oriented units are there, then it should be usable for graphics also. My money is on it not actually being a full GPU design, with only the massively parallel cores (RISC ISA in nature, like other GPUs' cores) remaining, and hopefully some Tensor Cores also.
AMD needs to be working up some Tensor Core based competition of its own to go on its Epyc based platform, alongside or on die with any Radeon Instinct SKUs for the AI market. Getting those neural nets trained takes lots of computing power, but once that training is done the trained neural net can be run from a smartphone, with smartphones now getting AI processors included on the MB/package alongside the DSPs and other specialized units.
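For reference, the heavy lifting a tensor core does for inference is a fused matrix multiply-accumulate (D = A×B + C) on small matrix tiles. A minimal sketch in plain Python; the 4×4 tile size matches what Volta's tensor cores operate on, but everything else here is purely illustrative:

```python
# Sketch of the fused D = A*B + C tile operation a tensor core performs.
# Volta's tensor cores work on 4x4 tiles; this pure-Python version is
# only meant to show the math, not any hardware behavior.

def tile_fma(a, b, c):
    """Return d = a @ b + c for square tiles given as lists of lists."""
    n = len(a)
    return [
        [sum(a[i][k] * b[k][j] for k in range(n)) + c[i][j]
         for j in range(n)]
        for i in range(n)
    ]

# Inference on a trained net is mostly a huge number of these tile ops:
identity = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
ones = [[1] * 4 for _ in range(4)]
d = tile_fma(identity, ones, ones)  # identity @ ones + ones = all 2s
print(d[0])  # -> [2, 2, 2, 2]
```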
On paper everyone can build a GPU. Even me.
No way. Just imagine how long it would take you to draw 1.542 billion transistors on a piece of paper.
That's just the FEOL (Front End of Line) transistor layers, and then there are all those BEOL (Back End of Line) metal interconnect layers. And each layer has its own sets of masks, depending on whether it's double, or more, patterned etching that needs to be done. So that's a lot of very large format paper (not really paper) indeed.
There are now Verilog tools and other automated layout tools, so things do not have to be done the old way (1,2).
Great website (computerhistory) by the way, and well worth a bookmark.
“1955: Photolithography Techniques Are Used to Make Silicon Devices” [See 3rd and 4th images in the slide presentation]
“1960: First Planar Integrated Circuit is Fabricated”
[See slides 2 and 4]
What’s the hash rate… ahahaha hahahaha i’m so funny 🙂
can it run meltdown?
ok, maybe not funnier, but just as bad
Anything that increases supply capacity and competition in the GPU market is good news for beleaguered consumers.
My guess is that after the mining boom Intel actually wants a piece of the pie. And with only two vendors in the market, it's always good to have more competition; even if whatever Intel offers is only good for mining compared to gaming, it will ease some pressure off us gamers.