Big accelerator Memory, An NVIDIA And IBM Led Project To Connect Your GPU To Your SSD
BaM! Just Like That The CPU Is Kicked Out Of The Loop
Ah, bottlenecks, the bane of gamers and professionals everywhere, not to mention the programmers who try to code around them. Over the years the location of the bottleneck has changed, and for some applications it is now the CPU which is slowing down the execution of code. IBM, NVIDIA, and several universities have collaborated on a new project called Big accelerator Memory to resolve this issue, designing a way for your GPU or other hardware accelerator to communicate directly with your SSD storage without needing any hops through the once mighty CPU.
In order to achieve this they needed to abandon traditional solutions such as virtual address translation and page-fault-based on-demand loading of data, and design their own protocols. BaM runs on a custom-built Linux kernel which supports the two major pieces of software at the heart of Big accelerator Memory: a software-managed cache in your GPU's memory, and a software library which enables your GPU threads to request data directly from NVMe SSDs. Their tests show a vast reduction in overhead and timing delays, and even insensitivity to translation lookaside buffer (TLB) misses, since the software-managed cache sidesteps the usual virtual-memory machinery.
The software and hardware are to be released under an open source license some time in the future; for now, you will have to content yourself with the information posted over at The Register.
Nvidia, IBM, and university collaborators have developed an architecture they say will provide fast fine-grain access to large amounts of data storage for GPU-accelerated applications, such as analytics and machine-learning training.