FP64 Tensor Core: 19.5 TFLOPS
Tensor Float 32 (TF32): 156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core: 624 TOPS | 1248 TOPS*
GPU Memory: 80GB HBM2e
GPU Memory Bandwidth: 2,039 GB/s
Max Thermal Design Power (TDP): 400W ***
Multi-Instance GPU: Up to 7 MIGs @ 10GB
Form Factor: SXM
Interconnect: NVLink: 600 GB/s; PCIe Gen4: 64 GB/s
Server Options: NVIDIA HGX™ A100-Partner and NVIDIA-Certified Systems with 4, 8, or 16 GPUs; NVIDIA DGX™ A100 with 8 GPUs
* With sparsity
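As a rough sanity check on the figures above, the ratio of peak dense FP16/BF16 Tensor Core throughput to memory bandwidth gives the roofline balance point: the arithmetic intensity (FLOPs per byte moved from HBM) a kernel needs before it becomes compute-bound rather than bandwidth-bound. A minimal sketch (not from the datasheet; constant names are illustrative):

```python
# Roofline balance point for the A100 80GB SXM, using the spec-sheet numbers.
PEAK_FP16_DENSE_TFLOPS = 312.0   # FP16/BF16 Tensor Core, dense (no sparsity)
MEM_BANDWIDTH_GBPS = 2039.0      # HBM2e memory bandwidth

def balance_point_flops_per_byte(tflops: float, gbps: float) -> float:
    """Arithmetic intensity at which peak compute and peak bandwidth meet."""
    return (tflops * 1e12) / (gbps * 1e9)

print(round(balance_point_flops_per_byte(PEAK_FP16_DENSE_TFLOPS,
                                         MEM_BANDWIDTH_GBPS)))  # → 153
```

Kernels averaging well under ~153 FLOPs per byte are memory-bandwidth-bound on this part even at FP16 Tensor Core peak; the sparsity figures (the starred values) double the compute side of the ratio.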