Double The Channels, And Introduce The World To Pseudo Channels
The HBM3 specification covers far more than just increased bandwidth, introducing several new features to the newest incarnation of this memory technology. First up are the bandwidth numbers: HBM3 doubles the per-pin data rate of HBM2, hitting 6.4Gbps, or 819GBps per device. As The Register points out, this matches the specs we saw from SK Hynix last year and represents an impressive performance jump for memory-dependent applications.
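As a quick sanity check on those figures, the per-device number follows directly from the per-pin rate, assuming the 1024-bit-wide data interface that HBM stacks use:

```python
# Back-of-the-envelope check of the HBM3 bandwidth figures above.
# Assumes the standard 1024-bit (1024-pin) HBM data interface.
PINS_PER_DEVICE = 1024   # HBM stacks expose a 1024-bit wide interface
PER_PIN_GBPS = 6.4       # HBM3 per-pin data rate

# Total bandwidth: pins times bits per second per pin, converted to bytes.
bandwidth_gbytes = PINS_PER_DEVICE * PER_PIN_GBPS / 8
print(f"{bandwidth_gbytes:.1f} GB/s per device")  # prints "819.2 GB/s per device"
```

Which lands on the 819GBps quoted in the spec, and exactly double HBM2E's 3.2Gbps per pin.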
This jump was accomplished thanks to two innovations in the design of HBM3. The first is the doubling of independent physical channels to 16, but the second is a little more unexpected: each physical channel is now split into two pseudo channels, as JEDEC refers to them. That means there are actually 32 available channels, assuming your application can make use of them.
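The channel arithmetic is straightforward to sketch. The widths below follow from dividing the 1024-bit interface evenly across the channels, which matches how JEDEC describes the HBM3 layout:

```python
# Sketch of the HBM3 channel layout: 16 independent physical channels,
# each subdivided into two pseudo channels.
INTERFACE_BITS = 1024
PHYSICAL_CHANNELS = 16
PSEUDO_PER_CHANNEL = 2

channel_width = INTERFACE_BITS // PHYSICAL_CHANNELS       # 64 bits per channel
pseudo_channels = PHYSICAL_CHANNELS * PSEUDO_PER_CHANNEL  # 32 in total
pseudo_width = channel_width // PSEUDO_PER_CHANNEL        # 32 bits each
print(pseudo_channels, channel_width, pseudo_width)       # prints "32 64 32"
```

More, narrower channels give the memory controller finer-grained parallelism, which is where much of the real-world bandwidth gain comes from.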
You will see 4-layer, 8-layer and 12-layer stacks similar to HBM2, as well as a 16-layer stack some time in the near future. One thing these stacks will have that the previous generation lacked is on-die symbol-based ECC, along with real-time error reporting and transparency. HBM3 will also be a better fit for systems with limited power budgets, as it runs at 1.1V.
For those of you who recall reading these specs before and are wondering why they are in the news again, there are two reasons. First and foremost, SK Hynix is now shipping product, though likely in small amounts as shortages continue to plague us all. The second reason is a little more boring: the HBM3 specifications may have been known previously, but it wasn't until Friday that JEDEC officially published them.
HBM is a high-performance memory type built from vertically stacked memory dies, typically mounted on the same substrate close to a CPU or GPU.