NVIDIA A100 80GB PCIe GPU

The A100 80GB PCIe debuts the world’s fastest PCIe-card memory bandwidth, at up to 1.94 terabytes per second (TB/s), to run the largest models and datasets.

Overview

The NVIDIA® A100 80GB PCIe card delivers unprecedented acceleration to power the world’s highest-performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. NVIDIA A100 Tensor Core technology supports a broad range of math precisions, providing a single accelerator for every compute workload. The NVIDIA A100 80GB PCIe supports double precision (FP64), single precision (FP32), half precision (FP16), and integer (INT8) compute tasks.

The NVIDIA A100 80GB card is a dual-slot, 10.5-inch PCI Express Gen4 card based on the NVIDIA Ampere GA100 graphics processing unit (GPU). It uses a passive heat sink for cooling and therefore requires system airflow to keep the card within its thermal limits. The NVIDIA A100 80GB PCIe operates unconstrained up to its maximum thermal design power (TDP) of 300 W to accelerate applications that demand the fastest computational speed and highest data throughput. The latest-generation A100 80GB PCIe doubles GPU memory and debuts the world’s highest PCIe-card memory bandwidth of up to 1.94 terabytes per second (TB/s), speeding time to solution for the largest models and most massive datasets.
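The 1.94 TB/s figure can be sanity-checked from the memory configuration. A minimal sketch, assuming a 5120-bit HBM2e interface and an effective per-pin data rate of roughly 3.02 Gbps (neither figure is stated on this page):

```python
# Rough reconstruction of the quoted HBM2e bandwidth.
# Assumptions (not from this page): 5120-bit memory bus,
# ~3.024 Gbps effective data rate per pin.
BUS_WIDTH_BITS = 5120
DATA_RATE_GBPS = 3.024  # assumed effective rate per pin

bandwidth_gb_s = BUS_WIDTH_BITS / 8 * DATA_RATE_GBPS
print(f"{bandwidth_gb_s:.0f} GB/s")  # ~1935 GB/s, i.e. ~1.94 TB/s
```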

The NVIDIA A100 80GB PCIe card features Multi-Instance GPU (MIG) capability, which allows the GPU to be partitioned into as many as seven isolated instances, providing a unified platform that lets elastic data centers dynamically adjust to shifting workload demands. With MIG, a single A100 can handle acceleration needs of every size, from the smallest job to the biggest multi-node workload, so IT managers can maximize the utility of every GPU in their data center.
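The seven-instance limit comes from the GPU's seven compute slices; each MIG profile name encodes how many slices it consumes (e.g. "3g.40gb" uses three). An illustrative sketch, not an NVIDIA API, of checking whether a partition plan fits on one A100 80GB:

```python
# Illustrative only: validate a MIG partition plan against the
# A100's seven compute slices. Profile names like "2g.20gb" encode
# the slice count before the "g.".
A100_SLICES = 7

def slices_used(profiles):
    """Sum the compute slices consumed by a list of MIG profile names."""
    return sum(int(p.split("g.")[0]) for p in profiles)

def fits(profiles):
    """True if the requested instances fit on a single A100."""
    return slices_used(profiles) <= A100_SLICES

print(fits(["3g.40gb", "2g.20gb", "2g.20gb"]))  # True: 3+2+2 = 7 slices
print(fits(["4g.40gb", "4g.40gb"]))             # False: 8 > 7 slices
```

In practice, partitioning is done with `nvidia-smi mig`; this sketch only mirrors the slice arithmetic behind it.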

NVIDIA A100 80GB PCIe cards support three NVIDIA® NVLink® bridges, allowing pairs of A100 80GB PCIe cards to be connected at 600 GB/s, roughly 10x the bandwidth of PCIe Gen4, to maximize application throughput for larger workloads.

Product Specification

                                   A100 80GB PCIe
FP64                               9.7 TFLOPS
FP64 Tensor Core                   19.5 TFLOPS
FP32                               19.5 TFLOPS
Tensor Float 32 (TF32)             156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core               312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core                   312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core                   624 TOPS | 1,248 TOPS*
GPU Memory                         80 GB HBM2e
GPU Memory Bandwidth               1,935 GB/s
Max Thermal Design Power (TDP)     300 W
Multi-Instance GPU                 Up to 7 MIGs @ 10 GB
Form Factor                        PCIe, dual-slot air-cooled or single-slot liquid-cooled
Interconnect                       NVIDIA® NVLink® Bridge for 2 GPUs: 600 GB/s**
                                   PCIe Gen4: 64 GB/s
Server Options                     Partner and NVIDIA-Certified Systems™ with 1-8 GPUs

* With sparsity
** Via NVLink Bridge for up to two GPUs
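The starred Tensor Core figures are the dense rates doubled by the Ampere architecture's 2:4 structured sparsity. A quick check against the table above:

```python
# Dense Tensor Core rates from the table (TFLOPS, TOPS for INT8).
dense = {"TF32": 156, "BF16": 312, "FP16": 312, "INT8": 624}

# 2:4 structured sparsity doubles the peak rate at each precision.
sparse = {k: 2 * v for k, v in dense.items()}
print(sparse)  # {'TF32': 312, 'BF16': 624, 'FP16': 624, 'INT8': 1248}
```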