The Register has obtained a slide describing the next family of Xeon processors to be released by Intel, the Purley platform, which includes Skylake. There are some interesting new developments, including on-die interfaces for either 10Gb/sec Ethernet or 100Gb/sec Omni-Path fabrics, which interested the participants at the HPC conference where the slides were shown. Intel also mentioned a brand new memory architecture, described as offering four times the capacity and 500 times the speed of current NAND at a lower price per chip, which is likely somewhat of an exaggeration on their part. There were also new Phi chips, including the long-awaited Knights Landing, as well as workstation chips for use outside the server room.
"A presentation given at a conference on high-performance computing (HPC) in Poland earlier this month appears to have yielded new insight into Intel's Xeon server chip roadmap.
A set of slides spotted by our sister site The Platform indicates that Chipzilla is moving toward a new server platform called "Purley" that will debut in 2017 or later."
Here is some more Tech News from around the web:
- Samsung to create industry giant via mega merger with itself @ The Register
- Red Hat Fedora 22 leaves beta to become a Vagrant @ The Inquirer
- There's a Moose loose aboot this hoose: Linux worm hijacks Twitter feeds for spam slinging @ The Register
- A Text Message Can Crash An iPhone and Force It To Reboot @ Slashdot
Interesting… Is there any more info on their new line of workstation Xeon chips?
If you set up flash to be accessed more like DRAM, I could easily see a massive speed boost. There may still be block-erase considerations, though. Servers often just need random access to large amounts of data, but writes to that data may be infrequent. I could see a server set up with a large amount of fast NVRAM, with the NVRAM acting more like a RAM disk. You would still have DRAM for caching frequently used items and for scratch space. If they are going this route, then stacking the flash dies using TSVs would make a lot of sense. The Hybrid Memory Cube architecture might be usable for accessing both.
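As a rough sketch of that RAM-disk idea, here is what byte-level random access to flash could look like from software, assuming the NVRAM shows up as an ordinary file on a mount point (/mnt/nvram and the file name are made up for illustration):

```python
import mmap

# Hypothetical NVRAM-backed mount point; any file path works for the demo.
PATH = "/mnt/nvram/dataset.bin"
SIZE = 16 * 1024 * 1024  # 16 MiB stand-in for a large, rarely-written dataset

# Create the backing file.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

with open(PATH, "r+b") as f:
    # Map the file into the process address space; reads become plain
    # memory loads instead of read() system calls.
    with mmap.mmap(f.fileno(), SIZE) as mem:
        value = mem[SIZE // 2]          # byte-level random access, DRAM-style
        mem[0:4] = b"\x00\x01\x02\x03"  # writes still hit the block-erase
                                        # machinery underneath, hence the
                                        # preference for infrequent writes
```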
This type of architecture would not be that useful for consumer systems, since most applications are specifically set up not to need random access to such large amounts of data. That may be changing, though, considering how big games are getting and how long their load times are. We are going to get a big boost in DRAM capacity regardless. It will be interesting to see whether some of these new technologies make things possible that previously were not. I have wondered whether ray tracing will become plausible with HBM; it should have relatively good random-access characteristics, but I don't know if that would be sufficient.
They are talking about a device with 6 channels of memory, but it is unclear whether this is DDR4 or the new HMC architecture. Six channels of DDR4 would be a 384-bit memory interface, which is very wide for a socketed processor. Perhaps the slide is referring to an HMC architecture? HMC is a lower pin count, high-speed serial interface, so 6 channels may be much more reasonable. I believe an HMC channel is only an 8- or 16-lane serial interface (similar to PCI-e) rather than a 64-bit parallel interface like DDR. It would still take a lot of pins, but far fewer than wide parallel interfaces, and serial links are also much easier to route. I am also wondering if they are planning on connecting NVRAM via the HMC interface. HMC decouples the memory from the interface to some extent, so it may be possible with custom flash dies. It will be interesting to see whether they stack flash dies with TSVs.
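To put rough numbers on the pin-count difference (all figures below are ballpark assumptions for illustration, not exact pinouts):

```python
# Back-of-the-envelope pin comparison for six memory channels.
channels = 6

# DDR4 assumption: 64 single-ended data pins per channel, plus roughly
# 40 address/command/control pins per channel.
ddr4_pins = channels * (64 + 40)

# HMC assumption: 8 or 16 serial lanes per direction per link, with
# differential signaling (2 pins per lane, 2 directions per link).
hmc_half = channels * 8 * 2 * 2
hmc_full = channels * 16 * 2 * 2

print(f"DDR4, 6 channels:        ~{ddr4_pins} pins")  # ~624
print(f"HMC, 6 half-width links:  {hmc_half} pins")   # 192
print(f"HMC, 6 full-width links:  {hmc_full} pins")   # 384
```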
Yawn. Fujitsu has had more sophisticated stuff up and running since last year, using HMC and on-chip fabric with optical interconnects.
http://www.fujitsu.com/global/products/computing/servers/supercomputer/primehpc-fx100/
Really? The Intel-o-philes over at Tech Report are oohing and aahing over the core/thread count on that 28-core Xeon, but a 12-core POWER8 still has 96 processor threads, more than any non-Xeon Phi SKU. 28 cores with 56 threads is not all that spectacular relative to SPARC, either. And once again, the non-enterprise-server, enthusiast websites are not doing any substantive comparison and contrast with enterprise server benchmarks, so it will take some real reading at the enterprise server websites to see how Intel's new kit fares relative to the other makers of server SKUs. Let's see how much things change over the next few years, with all the OpenPOWER POWER8s, IBM's POWER9s, and Zen-based HPC APUs (HBM memory and Greenland GPU cores) that are going to be entering the market. This is all in 2017, so expect more competition. Google will probably be starting to utilize POWER8s by that time, prices for server/HPC SKUs will be dropping as a result of increasing competition, and a good chunk of OpenPOWER licensees are based in China. ARM-style IP licensing is coming to the server/HPC market via OpenPOWER, as well as other IP licensing companies (MIPS, ARM, others).
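For reference, the thread counts being compared fall straight out of cores times SMT level (SMT8 on POWER8 versus 2-way Hyper-Threading on the Xeon; the snippet just restates the numbers above):

```python
# Hardware threads = cores x SMT ways.
def hardware_threads(cores: int, smt_ways: int) -> int:
    return cores * smt_ways

print(hardware_threads(12, 8))  # POWER8: 96 threads
print(hardware_threads(28, 2))  # Xeon:   56 threads
```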