NVIDIA H100 PCIe - Tensor Core GPU
Extraordinary performance, scalability, and security for every data center.

An Order-of-Magnitude Leap for Accelerated Computing
Tap into exceptional performance, scalability, and security for every workload with the NVIDIA H100 Tensor Core GPU. With the NVIDIA NVLink™ Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads. The GPU also includes a dedicated Transformer Engine to solve trillion-parameter language models. The H100’s combined technology innovations can speed up large language models (LLMs) by an incredible 30X over the previous generation to deliver industry-leading conversational AI.
Product Specifications
| Specification | H100 PCIe |
|---|---|
| FP64 | 26 teraFLOPS |
| FP64 Tensor Core | 51 teraFLOPS |
| FP32 | 51 teraFLOPS |
| TF32 Tensor Core | 756 teraFLOPS² |
| BFLOAT16 Tensor Core | 1,513 teraFLOPS² |
| FP16 Tensor Core | 1,513 teraFLOPS² |
| FP8 Tensor Core | 3,026 teraFLOPS² |
| INT8 Tensor Core | 3,026 TOPS² |
| GPU memory | 80GB |
| GPU memory bandwidth | 2TB/s |
| Decoders | 7 NVDEC, 7 JPEG |
| Max thermal design power (TDP) | 300-350W (configurable) |
| Multi-Instance GPUs | Up to 7 MIGs @ 10GB each |
| Form factor | PCIe dual-slot air-cooled |
| Interconnect | NVLink: 600GB/s; PCIe Gen5: 128GB/s |
| Server options | Partner and NVIDIA-Certified Systems with 1–8 GPUs |
| NVIDIA AI Enterprise | Included |
² With sparsity.
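The Tensor Core figures marked "with sparsity" assume NVIDIA's 2:4 structured sparsity, which doubles peak throughput; the dense (non-sparse) peak rate is half the listed value. A minimal sketch of that relationship, using the figures from the table above:

```python
# Peak H100 PCIe Tensor Core rates from the table above,
# quoted with 2:4 structured sparsity enabled (teraFLOPS).
SPARSE_PEAK_TFLOPS = {
    "TF32": 756,
    "BFLOAT16": 1513,
    "FP16": 1513,
    "FP8": 3026,
}

def dense_peak(sparse_tflops: float) -> float:
    """Dense peak throughput is half the sparsity-enabled figure."""
    return sparse_tflops / 2

for fmt, sparse in SPARSE_PEAK_TFLOPS.items():
    print(f"{fmt}: {sparse} TFLOPS (sparse) -> {dense_peak(sparse):g} TFLOPS (dense)")
```

This is why H100's dense FP8 rate is commonly cited as roughly 1.5 petaFLOPS for the PCIe card: 3,026 ÷ 2 = 1,513 teraFLOPS.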
*NVIDIA Authorized Distributor


