The NVIDIA A100 Tensor Core GPU delivers acceleration at every scale for AI, data analytics, and HPC to tackle the world’s toughest computing challenges. As the engine of the NVIDIA data center platform, A100 can efficiently scale up to thousands of GPUs or, using new Multi-Instance GPU (MIG) technology, can be partitioned into seven isolated GPU instances to accelerate workloads of all sizes. A100’s third-generation Tensor Core technology now accelerates more levels of precision for diverse workloads, speeding time to insight as well as time to market.
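The Tensor Core precision point can be exercised directly from cuBLAS. Below is a minimal, hedged sketch that opts a plain FP32 GEMM into TF32 Tensor Core math; it assumes CUDA 11 or newer with an Ampere-class GPU as device 0, the matrix size and contents are arbitrary placeholders, and error checking is omitted for brevity.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 1024;                                   // arbitrary square-matrix size
    std::vector<float> hA(n * n, 1.0f), hB(n * n, 2.0f), hC(n * n, 0.0f);

    float *dA, *dB, *dC;
    cudaMalloc(&dA, n * n * sizeof(float));
    cudaMalloc(&dB, n * n * sizeof(float));
    cudaMalloc(&dC, n * n * sizeof(float));
    cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Opt this handle's FP32 GEMMs into TF32 Tensor Core math (Ampere, CUDA 11+).
    cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

    const float alpha = 1.0f, beta = 0.0f;
    // Column-major SGEMM: C = alpha * A * B + beta * C
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("C[0] = %.1f (expected %.1f)\n", hC[0], 2.0f * n);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

Build with `nvcc tf32_gemm.cu -lcublas`. On hardware without TF32 Tensor Cores the math-mode request is ignored and the GEMM runs as ordinary FP32.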
Product specifications
Attribute name | Attribute value |
---|---|
Graphics card memory type | High Bandwidth Memory 2 (HBM2) |
Graphics processor | NVIDIA GA100 (8,192 CUDA cores on the full die) |
CUDA cores | 6912 |
Discrete graphics card memory | 40 GB |
CUDA | Y |
Bus bandwidth (PCI Express 4.0 x16, per direction) | 32 GB/s |
Graphics processor family | NVIDIA |
Form factor | Full-Height/Full-Length (FH/FL) |
Product color | Black, Gold |
Memory bandwidth (max) | 1,555 GB/s |
Cooling type | Passive |
Interface type | PCI Express 4.0 |
Power consumption (typical) | 250 W |
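Several of the figures above (CUDA core count, memory size, memory bandwidth) can be cross-checked on an installed card through the CUDA runtime. The sketch below is one way to do so, assuming the A100 is device 0 and using the usual 64 FP32 CUDA cores per SM for compute capability 8.0; the derived bandwidth (2 × memory clock × bus width) should come out near 1,555 GB/s on the 40 GB card.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "No CUDA device found\n");
        return 1;
    }
    printf("Device           : %s (compute capability %d.%d)\n",
           prop.name, prop.major, prop.minor);
    printf("Multiprocessors  : %d\n", prop.multiProcessorCount);
    // Compute capability 8.0 SMs expose 64 FP32 CUDA cores each,
    // so an A100 with 108 SMs reports 108 * 64 = 6912 CUDA cores.
    printf("CUDA cores       : %d (assuming 64 FP32 cores per SM)\n",
           prop.multiProcessorCount * 64);
    printf("Device memory    : %.1f GiB\n",
           prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    // Peak HBM2 bandwidth: memory clock (kHz -> Hz), x2 for DDR,
    // times the bus width in bytes, reported in GB/s.
    double gbps = 2.0 * prop.memoryClockRate * 1e3
                  * (prop.memoryBusWidth / 8.0) / 1e9;
    printf("Memory bandwidth : %.0f GB/s\n", gbps);
    return 0;
}
```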