According to Reuters, Japan’s Ministry of Economy, Trade and Industry has set aside 19.5 billion yen to build a high-end supercomputer. The target is 130 petaFLOPS, which would put it ahead of every other announced cluster. The article claims that the government will rent the computer out to Japanese corporations, many of which currently use American-based cloud services.
The supercomputer has been named ABCI: AI Bridging Cloud Infrastructure.
From a hardware standpoint? There’s not a whole lot else to say about it. The money has been set aside, but no one has been selected to build it. Companies will submit their bids by December 8th, and we assume an announcement will follow at some point after that.
This also means we don’t know what is planned to go into each node. Despite targeting ABCI at AI, Japan is sticking to the “FLOPs” rating, and thus will probably be focused on floating-point workloads. It would be weird to see such an expensive machine be focused on 8- or 16-bit instructions, but then we see Google creating custom ASICs, called TPUs, that seem to get huge performance boosts by sticking to low-precision workloads. Could that even scale to a competitive supercomputer? Or would it cut out too many potential customers that need 32- and 64-bit precision?
Either way, I would guess that this computer will use more conventional, GPU-style co-processors from someone like Intel (Xeon Phi) or NVIDIA. Really, though, we don’t know. No one does at this point. The branding is interesting, though.