NVIDIA’s Hopper Architecture Arrives with 80 Billion Transistor H100 GPU

Source: NVIDIA

“An order of magnitude performance leap over its predecessor”

NVIDIA was very busy at GTC 2022 yesterday, announcing – among other things – its next GPU architecture, Hopper. Also announced was the first GPU powered by NVIDIA Hopper architecture, the H100, which boasts a staggering 80 billion transistors, and is being called “the world’s largest and most powerful accelerator” by NVIDIA.

[Image: NVIDIA H100 GPU die. Source: NVIDIA]

NVIDIA lists numerous “technology breakthroughs” for the H100, reproduced from the company’s press release below:

The NVIDIA H100 GPU sets a new standard in accelerating large-scale AI and HPC, delivering six breakthrough innovations:

World’s Most Advanced Chip — Built with 80 billion transistors using a cutting-edge TSMC 4N process designed for NVIDIA’s accelerated compute needs, H100 features major advances to accelerate AI, HPC, memory bandwidth, interconnect and communication, including nearly 5 terabytes per second of external connectivity. H100 is the first GPU to support PCIe Gen5 and the first to utilize HBM3, enabling 3TB/s of memory bandwidth. Twenty H100 GPUs can sustain the equivalent of the entire world’s internet traffic, making it possible for customers to deliver advanced recommender systems and large language models running inference on data in real time.
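
Some quick arithmetic puts that last claim in scale. A minimal sketch, using the ~5 TB/s external-connectivity figure quoted above (the implied world-traffic number comes from NVIDIA's claim itself, not an independent source):

```python
# Quick check on the "twenty H100s vs. world internet traffic" line,
# using the ~5 TB/s external-connectivity figure quoted above.
per_gpu_tb_s = 5                     # TB/s of external connectivity per H100
aggregate_tb_s = 20 * per_gpu_tb_s   # 100 TB/s across twenty GPUs
aggregate_tbit_s = aggregate_tb_s * 8

# NVIDIA's claim thus pegs aggregate world internet traffic at roughly
# 100 TB/s (~800 Tb/s); published estimates of that number vary, so
# treat this as marketing-scale arithmetic rather than a measurement.
print(f"20 x H100: ~{aggregate_tb_s} TB/s (~{aggregate_tbit_s} Tb/s)")
```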

New Transformer Engine — Now the standard model choice for natural language processing, the Transformer is one of the most important deep learning models ever invented. The H100 accelerator’s Transformer Engine is built to speed up these networks as much as 6x versus the previous generation without losing accuracy.
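
NVIDIA's broader Hopper materials describe the Transformer Engine as dynamically mixing FP8 and 16-bit precision, rescaling each tensor so it fits the narrower format's range before converting. As a rough conceptual sketch of that rescaling idea (a NumPy stand-in for illustration only, not NVIDIA's implementation):

```python
import numpy as np

# Conceptual illustration of per-tensor dynamic range scaling, the idea
# behind mixing FP8 and 16-bit precision. This is NOT NVIDIA's code:
# we emulate a narrow format by scaling a tensor so its largest value
# fits the format's range, then rounding coarsely to mimic storage.

FP8_E4M3_MAX = 448.0  # largest finite value in the e4m3 8-bit format

def quantize_with_scale(x: np.ndarray):
    """Scale x into the FP8 range, then round to simulate storage."""
    scale = FP8_E4M3_MAX / np.abs(x).max()
    x_lowp = np.round(x * scale * 16) / 16  # crude stand-in for the short mantissa
    return x_lowp, scale

def dequantize(x_lowp: np.ndarray, scale: float):
    """Undo the range scaling to recover an approximation of x."""
    return x_lowp / scale

acts = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_with_scale(acts)
print("max abs error:", np.abs(dequantize(q, s) - acts).max())
```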

2nd-Generation Secure Multi-Instance GPU — MIG technology allows a single GPU to be partitioned into seven smaller, fully isolated instances to handle different types of jobs. The Hopper architecture extends MIG capabilities by up to 7x over the previous generation by offering secure multitenant configurations in cloud environments across each GPU instance.
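
For context, MIG partitioning is driven through nvidia-smi. The sketch below follows the A100-era workflow and assumes it carries over to Hopper; the profile ID is a placeholder, and the commands require root on a MIG-capable GPU:

```python
import subprocess

# Sketch of the MIG workflow as exposed by nvidia-smi on A100-class
# hardware; exact profile IDs for H100 are an assumption here.

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Enable MIG mode on GPU 0 (requires root; may need a GPU reset).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# 2. List the GPU instance profiles the driver offers for this GPU.
run(["nvidia-smi", "mig", "-lgip"])

# 3. Create seven single-slice instances plus their compute instances.
#    "19" is the 1g profile ID on A100; treat it as a placeholder.
run(["nvidia-smi", "mig", "-cgi", ",".join(["19"] * 7), "-C"])
```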

Confidential Computing — H100 is the world’s first accelerator with confidential computing capabilities to protect AI models and customer data while they are being processed. Customers can also apply confidential computing to federated learning for privacy-sensitive industries like healthcare and financial services, as well as on shared cloud infrastructures.

4th-Generation NVIDIA NVLink — To accelerate the largest AI models, NVLink combines with a new external NVLink Switch to extend NVLink as a scale-up network beyond the server, connecting up to 256 H100 GPUs at 9x higher bandwidth versus the previous generation using NVIDIA HDR Quantum InfiniBand.

DPX Instructions — New DPX instructions accelerate dynamic programming — used in a broad range of algorithms, including route optimization and genomics — by up to 40x compared with CPUs and up to 7x compared with previous-generation GPUs. This includes the Floyd-Warshall algorithm to find optimal routes for autonomous robot fleets in dynamic warehouse environments, and the Smith-Waterman algorithm used in sequence alignment for DNA and protein classification and folding.
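
To make the dynamic-programming angle concrete, here is the Floyd-Warshall recurrence NVIDIA mentions, as a plain CPU-side Python reference (nothing Hopper-specific); the min/add update in the inner loop is exactly the pattern DPX instructions accelerate:

```python
# Reference Floyd-Warshall all-pairs shortest paths, the first of the
# two dynamic-programming workloads NVIDIA cites for DPX.
INF = float("inf")

def floyd_warshall(dist):
    """dist: square matrix of edge weights, INF where no direct edge."""
    n = len(dist)
    d = [row[:] for row in dist]  # don't mutate the caller's matrix
    for k in range(n):            # allow paths routed through vertex k
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Tiny example: four warehouse waypoints with directed travel costs.
graph = [
    [0,   3, INF, 7],
    [8,   0,   2, INF],
    [5, INF,   0, 1],
    [2, INF, INF, 0],
]
for row in floyd_warshall(graph):
    print(row)
```

Smith-Waterman alignment fills a 2D table with the same max/add structure, which is why it benefits from the same instructions.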

[Image: NVIDIA H100 CNX converged accelerator. Source: NVIDIA]

H100 will come in SXM and PCIe form factors to support a wide range of server design requirements. A converged accelerator will also be available, pairing an H100 GPU with an NVIDIA ConnectX®-7 400Gb/s InfiniBand and Ethernet SmartNIC.

NVIDIA’s H100 SXM will be available in HGX™ H100 server boards with four- and eight-way configurations for enterprises with applications scaling to multiple GPUs in a server and across multiple servers. HGX H100-based servers deliver the highest application performance for AI training and inference along with data analytics and HPC applications.

The H100 PCIe, with NVLink to connect two GPUs, provides more than 7x the bandwidth of PCIe 5.0, delivering outstanding performance for applications running on mainstream enterprise servers. Its form factor makes it easy to integrate into existing data center infrastructure.

The H100 CNX, a new converged accelerator, couples an H100 with a ConnectX-7 SmartNIC to provide groundbreaking performance for I/O-intensive applications such as multinode AI training in enterprise data centers and 5G signal processing at the edge.

NVIDIA Hopper architecture-based GPUs can also be paired with NVIDIA Grace™ CPUs with an ultra-fast NVLink-C2C interconnect for over 7x faster communication between the CPU and GPU compared to PCIe 5.0. This combination — the Grace Hopper Superchip — is an integrated module designed to serve giant-scale HPC and AI applications.
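
Since both this NVLink-C2C claim and the PCIe-card NVLink claim above are stated relative to PCIe 5.0, a back-of-envelope comparison helps frame them. The PCIe figures below are standard; the implied absolute link bandwidth is our inference, not a number NVIDIA quotes here:

```python
# Back-of-envelope framing for the two "7x PCIe 5.0" comparisons.
# PCIe 5.0 x16: 32 GT/s per lane, 16 lanes, 128b/130b encoding.
lanes, gt_s, encoding = 16, 32, 128 / 130
per_direction_gb_s = lanes * gt_s / 8 * encoding  # ~63 GB/s one way
bidirectional_gb_s = 2 * per_direction_gb_s       # ~126 GB/s both ways

# "More than 7x" PCIe 5.0 therefore implies a link in the ~900 GB/s
# class (our inference; the announcement gives no absolute figure).
print(f"PCIe 5.0 x16: ~{bidirectional_gb_s:.0f} GB/s bidirectional")
print(f"7x that:      ~{7 * bidirectional_gb_s:.0f} GB/s")
```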

[Image: NVIDIA Grace Superchip. Source: NVIDIA]

Rather than continuing to reproduce large quotes from NVIDIA here, we'll point you to NVIDIA's Hopper architecture announcement, where you can read more about the H100, the DGX H100, the H100 CNX, and the Grace Hopper Superchip (among other things not linked here).

About The Author

Sebastian Peak

Editor-in-Chief at PC Perspective. Writer of computer stuff, vintage PC nerd, and full-time dad. Still in search of the perfect smartphone. In his nonexistent spare time Sebastian's hobbies include hi-fi audio, guitars, and road bikes. Currently investigating time travel.
