
NVIDIA A100 80GB PCIe GPU

The A100 is offered in 40 GB and 80 GB memory versions. The 80 GB version debuts the world's fastest memory bandwidth at over 2 terabytes per second (TB/s), to tackle the largest models and data sets.

Product Overview

The NVIDIA® A100 80GB PCIe card delivers unprecedented acceleration to power the world’s highest-performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. NVIDIA A100 Tensor Core technology supports a broad range of math precisions, providing a single accelerator for every compute workload. The NVIDIA A100 80GB PCIe supports double precision (FP64), single precision (FP32), half precision (FP16), and integer (INT8) compute tasks.
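
As a hedged illustration of targeting these precisions from software, the minimal sketch below assumes a CUDA-enabled PyTorch build on an A100 host; the matrix sizes and the choice of PyTorch are illustrative assumptions, not part of the product specification.

```python
# Minimal sketch: selecting math precisions on an A100 with PyTorch.
# Assumes a CUDA build of PyTorch; shapes are arbitrary placeholders.
import torch

device = torch.device("cuda")

# Allow FP32 matrix math to run as TF32 on the Ampere Tensor Cores.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

x = torch.randn(4096, 4096, device=device)  # FP32 inputs
w = torch.randn(4096, 4096, device=device)

y_tf32 = x @ w  # FP32 matmul, executed as TF32

# Half precision (FP16) via autocast uses the FP16 Tensor Core path.
with torch.autocast(device_type="cuda", dtype=torch.float16):
    y_fp16 = x @ w

# Double precision (FP64) for accuracy-critical HPC work.
y_fp64 = x.double() @ w.double()
print(y_tf32.dtype, y_fp16.dtype, y_fp64.dtype)
```

On Ampere GPUs, the TF32 setting lets existing FP32 matrix math use the Tensor Cores without changes to the model code itself.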

The NVIDIA A100 80GB card is a dual-slot 10.5 inch PCI Express Gen4 card based on the NVIDIA Ampere GA100 graphics processing unit (GPU). It uses a passive heat sink for cooling, which requires system airflow to properly operate the card within its thermal limits. The NVIDIA A100 80GB PCIe operates unconstrained up to its maximum thermal design power (TDP) level of 300 W to accelerate applications that require the fastest computational speed and highest data throughput. The latest generation A100 80GB PCIe doubles GPU memory and debuts the world’s highest PCIe card memory bandwidth up to 1.94 terabytes per second (TB/s), speeding time to solution for the largest models and most massive data sets.  

The NVIDIA A100 80GB PCIe card features Multi-Instance GPU (MIG) capability, which allows the card to be partitioned into as many as seven isolated GPU instances, providing a unified platform that lets elastic data centers dynamically adjust to shifting workload demands. Partitioned with MIG into up to seven smaller instances, an A100 can readily handle acceleration needs of different sizes, from the smallest job to the biggest multi-node workload. This versatility means IT managers can maximize the utility of every GPU in their data center.
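
A hedged sketch of how MIG state can be inspected from software is shown below, assuming the nvidia-ml-py (pynvml) bindings and a driver with MIG support; enabling MIG mode and creating the instances is done separately (for example with nvidia-smi) and is not shown.

```python
# Minimal sketch: inspecting MIG state with NVML via the pynvml bindings.
# Assumes the nvidia-ml-py package and an NVIDIA driver with MIG support.
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    current, pending = pynvml.nvmlDeviceGetMigMode(gpu)
    print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

    # Walk the MIG instances created on this GPU (at most seven).
    for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except pynvml.NVMLError:
            continue  # no MIG device at this index
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        print(f"MIG instance {i}: {mem.total / 2**30:.1f} GiB total memory")
finally:
    pynvml.nvmlShutdown()
```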

NVIDIA A100 80GB PCIe cards use three NVIDIA® NVLink® bridges to connect a pair of A100 80GB PCIe cards, delivering 600 GB/s of GPU-to-GPU bandwidth, nearly 10x the 64 GB/s of PCIe Gen4, to maximize application throughput on larger workloads.
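
As a hedged illustration, assuming the nvidia-ml-py (pynvml) bindings on a system where the NVLink bridges are installed, the sketch below counts the active NVLink links reported for the first GPU.

```python
# Minimal sketch: counting active NVLink links on an A100 PCIe card via NVML.
# Assumes the nvidia-ml-py (pynvml) bindings and an NVLink-bridged card pair.
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    active = 0
    for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
        try:
            state = pynvml.nvmlDeviceGetNvLinkState(gpu, link)
        except pynvml.NVMLError:
            continue  # link not present or not supported on this board
        if state == pynvml.NVML_FEATURE_ENABLED:
            active += 1
    print(f"Active NVLink links on GPU 0: {active}")
finally:
    pynvml.nvmlShutdown()
```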


Specifications

A100 80GB PCIe
FP64                                9.7 TFLOPS
FP64 Tensor Core                    19.5 TFLOPS
FP32                                19.5 TFLOPS
Tensor Float 32 (TF32)              156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core                312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core                    312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core                    624 TOPS | 1,248 TOPS*
GPU Memory                          80 GB HBM2e
GPU Memory Bandwidth                1,935 GB/s
Max Thermal Design Power (TDP)      300 W
Multi-Instance GPU                  Up to 7 MIGs @ 10 GB
Form Factor                         PCIe dual-slot air-cooled or single-slot liquid-cooled
Interconnect                        NVIDIA® NVLink® Bridge for up to 2 GPUs: 600 GB/s **
                                    PCIe Gen4: 64 GB/s
Server Options                      NVIDIA-Certified Systems™ from partners with 1-8 GPUs

* With sparsity