7 Billion Dollars In Fabric To Wrap GPUs In
It has been just over a year since NVIDIA started the process of acquiring Mellanox, and today marks the successful close of that deal. This is a large and interesting step for NVIDIA, as it will enable the company to offer HPC servers that do not require an x86 processor, sidestepping AMD and Intel when building new products. Instead, thanks to Mellanox’s BlueField, NVIDIA can use fabric interconnects that allow GPUs to communicate directly with one another over PCIe 4.0, with ARM processors handling the traffic.
NVIDIA will again be competing against servers built on HPE-Cray’s Slingshot interconnect, which is compatible with both Intel’s and AMD’s chips. However, instead of needing to incorporate silicon from the competition, NVIDIA can now offer servers built entirely in-house. That gives the company production and design advantages it could not leverage previously for machine learning products such as visual training for self-driving cars.
ServeTheHome points out another advantage to using BlueField fabric for NVIDIA’s HPC line: the CUDA API will be able to run on ARM processors, allowing seamless communication between the GPUs and the processors handling the interconnects. We expect to hear more details about that development in NVIDIA’s virtual GTC 2020 keynote on May 14th.
The acquisition, initially announced on March 11, 2019, unites two of the world’s leading companies in high-performance and data center computing. Combining NVIDIA’s computing expertise with Mellanox’s high-performance networking technology, the deal should enable customers to achieve higher performance, greater utilization of computing resources, and lower operating costs.