According to Reuters, Japan’s Ministry of Economy, Trade and Industry has set aside 19.5 billion yen to build a high-end supercomputer. That budget targets 130 PetaFLOPS, which would put the machine ahead of all other announced clusters. The article claims that the government will rent the computer out to Japanese corporations, many of which currently use American-based cloud services.
The supercomputer has been named ABCI: AI Bridging Cloud Infrastructure.
Image Credit: つ via Wikipedia
From a hardware standpoint? There’s not a whole lot else to say about it. The money has been set aside, but no one has been selected to build the machine. Companies will submit their bids by December 8th, and we assume an announcement will follow at some point after that.
This also means we don’t know what is planned to go into each node. Despite targeting ABCI at AI, Japan is sticking to the “FLOPS” rating, and thus will probably be focused on floating-point workloads. It would be weird to see such an expensive machine focused on 8- or 16-bit instructions, but then we see Google creating custom ASICs, called TPUs, that seem to get huge performance boosts by sticking to low-precision workloads. Could that even scale to a competitive supercomputer? Or would it cut out too many potential customers that need 32- and 64-bit precision?
Either way, I would guess that this computer will use more conventional, GPU-style co-processors from someone like Intel (Xeon Phi) or NVIDIA. Really, though, we don’t know. No one does at this point. The branding is interesting, though.
Maybe another ARM-and-Fujitsu custom ARM project with more 8- and 16-bit functionality/extensions, similar to what Fujitsu and ARM did in creating the new Scalable Vector Extension (SVE) instructions, which work from 128 bits to 2048 bits in 128-bit increments! Except with maybe some new ARM ISA instruction extensions that work better with 8- and 16-bit AI-oriented workloads. You know that Fujitsu is moving away from SPARC processors to ARMv8-A ISA based custom designs for the Japanese post-K exaflops computer.
I would not rule out Fujitsu and ARM Holdings (SoftBank) even using some of the new ARM Mali/Bifrost GPU IP, beefed up to do more 8- and 16-bit acceleration workloads. Mali/Bifrost has 8- and 16-bit FP as well as 32-bit FP for mobile graphics and acceleration workloads.
“The Bifrost GPU architecture
and the ARM Mali-G71 GPU
Jem Davies
ARM Fellow and VP of Technology”
http://www.hotchips.org/wp-content/uploads/hc_archives/hc28/HC28.22-Monday-Epub/HC28.22.10-GPU-HPC-Epub/HC28.22.110-Bifrost-JemDavies-ARM-v04-9.pdf
P.S. Scott, please do not forget that AMD has some new GPU IP competing in the HPC/workstation/server market also, and AMD has been getting some new business in that market with the Chinese and even Google.
“AMD stock rises as company enters Deep Learning with Google”
http://www.kitguru.net/components/graphic-cards/matthew-wilson/amd-stock-rises-as-company-enters-deep-learning-with-google/
Yeah. It would be really nice to see AMD GPUs get into the supercomputer space, especially with their OpenCL performance.
Also, Scott, the Hot Chips Denver/Parker presentation is now online:
“INTRODUCING “PARKER”
Next-Generation Tegra System-On-Chip
Andi Skende | Distinguished Engineer, “Parker” Lead Architect”
http://www.hotchips.org/wp-content/uploads/hc_archives/hc28/HC28.22-Monday-Epub/HC28.22.30-Low-Power-Epub/HC28.22.322-Tegra-Parker-AndiSkende-NVIDIA-v01.pdf
Cool. Thanks!
All the Hot Chips 2016 PDFs/presentations are now accessible over at the Hot Chips website, so they can be viewed with no login required. The one on the POWER9 CPU is a very nice read, as are some from Intel, AMD, and others! There are many good reads among this year's Hot Chips presentations.
Wouldn’t conventional be an IBM/Nvidia rig? Intel got booted from this space long ago, and AMD? I wouldn’t bet on it.
AMD is getting business in China (server CPU and GPU) and with Google for GPU accelerators, and maybe AMD’s Zen, with its price/performance metric considered, may even get AMD some Google x86/GPU accelerator business.
One thing to consider about AMD’s GPU and Zen CPU pricing latitude is that Nvidia can only price its GPU SKUs, while AMD will be able to offer the server/HPC/workstation market package deals combining Zen CPUs and Radeon Pro WX GPU accelerators. So look at AMD’s price/performance metric and its pricing latitude over the whole server/HPC/workstation package deal of CPUs, GPUs, and motherboard chipsets for that professional market's business.
Be aware also that AMD is in with OpenCAPI, so there will be OpenCAPI/CAPI2 support for AMD’s GPU, and maybe CPU, SKUs. The POWER9 processor SKUs from IBM will support OpenCAPI/CAPI2, and many third-party OpenPOWER POWER9 licensees' SKUs will also support CAPI2. So AMD will have GPUs that can interface with the POWER9 hardware ecosystem in a more standardized manner than NVLink allows. IBM is no dummy with respect to limiting itself and the many OpenPOWER licensees to Nvidia’s proprietary NVLink interconnect technology only!
“OpenCAPI Unveiled: AMD, IBM, Google, Xilinx, Micron and Mellanox Join Forces in the Heterogenous Computing Era”
http://www.anandtech.com/show/10759/opencapi-unveiled-amd-ibm-google-more
I highly doubt AMD's server CPU part, since there aren't any announcements from companies about using AMD's Opterons or AMD's upcoming Naples server CPUs. At the recent SC16 conference, AMD mainly presented ROCm, while Zen was rarely talked about ( https://www.youtube.com/watch?v=RVzjSCelZo4 ). There wasn't any AMD Naples server prototype at the show either. Many of the other HPC vendors didn't even speak about any upcoming systems using AMD chips; it was mostly Intel, NVIDIA, or IBM (for example https://www.youtube.com/watch?v=RFpcExgTHEk ). As for OpenCAPI, that will benefit mainly IBM and NVIDIA only. Just look at their other collaboration effort ( https://www.top500.org/news/ibm-nvidia-team-on-enterprise-ready-deep-learning-solution/ ), which is very similar to the new OpenCAPI platform. This is very much like a repeat of AMD's involvement with Facebook's OpenCompute, which in the end came to nothing.
References/links go at the bottom, with each linked video's or article's title (included by you in your post) in quotes, and numbered if there is more than one. And most companies in the testing phase do not announce their design choices until the final contracts are made/signed.
Also, AMD's business with China and with Google may not be of the supercomputer variety that would warrant discussion at SC16! And the server/workstation markets are a larger source of revenue than the HPC/supercomputer markets to begin with. There are some cloud computing services providers that have many more computing resources available for number crunching than the largest supercomputing sites; it's just that the cloud services providers may not have all that power in one location/site. Also, if you really read the post that you replied to, it makes reference to the "Server/HPC/Workstation" market and not just any potential supercomputer business alone.
Please see the article below for a 32-core HPC APU, and remember: AMD may even be adding some FPGA compute into the HBM2 stacks for that exascale-system APU on an interposer, if some of AMD's patent filings are any indication. So it's not just CPU-only Zen server/HPC/workstation SKUs that will be first to market; other projects like that HPC APU on an interposer will be coming online in 2017-2018 also! Look for the APU-on-an-interposer designs from AMD to be made into consumer variants as well.
“Pondering AMD’s Ambitions for High-Performance APUs”
https://www.top500.org/news/pondering-amds-ambitions-for-high-performance-apus/
Really, you are being quite daft; here is the reason for OpenCAPI and other industry-standards-based interconnect projects. Look, AMD is a founding member of OpenCAPI, and Nvidia is there also, at a more associate member level, hedging its bets because NVLink will only be used by a few. Why pay for Nvidia's costly interconnect IP when there will be other, less costly open standards available?
Be sure to go and read the article below, and its linked articles, to get a better view of things! Go read the Hot Chips POWER9 PDF/presentation also; it's a very good read, and CAPI2 will be available on the POWER9s for AMD and others to use to wire up accelerators to IBM's and any third-party POWER9 licensees' (Google's and others') servers, so that any POWER9-based systems can make use of AMD's and others' GPU accelerators.
“Why OpenCAPI is a declaration of interconnect fabric war”
http://www.theregister.co.uk/2016/10/14/opencapi_declaration_of_interconect_war/
You are quite misinformed. Fujitsu's Oakforest-PACS, a recent entry now ranked 6th in the Top500 ( https://www.top500.org/lists/2016/11/ ), 5th in the Green500 ( https://www.top500.org/news/green500-reaches-new-heights-in-energy-efficient-supercomputing/ ), and 3rd in HPCG ( http://www.hpcg-benchmark.org/custom/index.html?lid=155&slid=289 ), is actually an all-Intel Xeon Phi machine. In fact, many of the new entries in the Top500 (check the list) are Intel Xeon Phi powered systems.
Nah. The second-fastest supercomputer in the world, Tianhe-2, is based on Xeon E5s with Xeon Phi co-processors. Intel's still in the game, despite NVIDIA's design wins.
I am curious as to how much work it is to program for these different architectures. From what I have heard, programming with Nvidia CUDA requires a lot of low-level hardware management. I don't know what programming for Xeon Phi looks like, though. It doesn't seem like it would require so much low-level resource management, since it is really the AMD64 instruction set with more powerful vector extensions. I guess it may still require low-level work for good optimization. I don't have that much knowledge of it, but it seems like AMD may be somewhere in the middle, with more hardware-based scheduling, although hardware vs. software scheduling isn't necessarily that visible to the programmer.
I would expect Nvidia-based systems to perform better, but they may require a lot more programming work to reach that level of performance. Some applications may be naturally more suited to more GPU-like architectures or more CPU-like architectures. Have you worked with all three? Does Intel have an advantage in ease of programming? I would expect AMD to be at a distinct disadvantage in that they have to depend on mostly open-source, non-proprietary tools. They don't have the resources to put into compiler development like Nvidia and Intel.
Intel is going to get the most competition it has ever had to worry about starting in 2017 and beyond. Yes, Intel has a large x86-based market, and it has some software-stack advantages as the incumbent. But that just makes things all the better for AMD and Zen, at first pairing Zen CPUs with Radeon Pro WX GPUs in a package deal to compete for any of Intel's current x86-based customers across all markets. Then there are IBM and the third-party OpenPOWER POWER9 licensees, which will have the option of going POWER9/NVLink with Nvidia's GPUs at a bit more Nvidia vendor lock-in, or going with OpenPOWER POWER9s using OpenCAPI/CAPI2 and pairing their POWER9s with Radeon Pro WX GPUs and/or other accelerators that use the OpenCAPI/CAPI2 interconnect IP. AMD is a founding member of OpenCAPI, so good work there, Lisa, in getting AMD's foot in that POWER9 market's door.
One other question about this supercomputer! Will it be able to use Blender to create a 3D model that can be sent to a 3D printer that can actually print an actual working blender, which can then be used to answer that often-asked question: will it blend?
That's a bit of a stretch. :p
Japan has also set aside $1 billion USD for the replacement for the $1.3 billion USD K computer.