NVIDIA A100 Tensor Core GPU
The NVIDIA A100 Tensor Core GPU delivers acceleration at every scale to power high-performing elastic data centers for AI, data analytics, and HPC. Built on the NVIDIA Ampere architecture with third-generation Tensor Cores, the A100 provides higher performance than the prior generation and can be partitioned into as many as seven isolated GPU instances through Multi-Instance GPU (MIG), allowing capacity to adjust dynamically to shifting demands.
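The MIG partitioning described above is administered with the `nvidia-smi` utility. The following sketch shows the general workflow; it assumes an A100 at GPU index 0, administrative privileges, and no running workloads, and the profile IDs shown are illustrative, since available profiles vary by GPU model and driver version.

```shell
# Enable MIG mode on GPU 0 (requires a GPU reset; stop all workloads first)
nvidia-smi -i 0 -mig 1

# List the GPU instance profiles the driver offers for this card
nvidia-smi mig -lgip

# Create GPU instances from chosen profiles and their compute instances (-C)
# The profile IDs below are placeholders; take real IDs from the -lgip output
nvidia-smi mig -cgi 19,19,19 -C

# Verify the resulting MIG devices
nvidia-smi -L
```

Workloads can then be pinned to an individual MIG device, so several inference jobs share one physical A100 without contending for compute or memory.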
The Tensor Core technology in the Ampere architecture brings dramatic performance gains to AI workloads, delivering strong acceleration for inference in particular. This capability is a significant advantage for data scientists and the organization as a whole, and IT professionals benefit from reduced operational complexity by managing a single technology that is easy to onboard and operate across use cases.
The A100 GPU is a dual-slot, 10.5-inch PCI Express (PCIe) Gen4 card based on the NVIDIA Ampere architecture and cooled by a passive heat sink. The PCIe-based A100 supports double-precision (FP64), single-precision (FP32), and half-precision (FP16) compute tasks, along with unified virtual memory and a page migration engine. The A100 GPU is available in 40 GB and 80 GB memory versions.
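The precision tiers listed above trade accuracy for speed and memory: FP16 halves storage relative to FP32 and runs fastest on Tensor Cores, at the cost of a much coarser mantissa. A minimal CPU-only sketch of that trade-off uses Python's `struct` module, whose `'e'`, `'f'`, and `'d'` formats correspond to IEEE 754 half, single, and double precision; this illustrates the numeric formats only and does not exercise the GPU.

```python
import struct

def round_trip(value, fmt):
    """Pack a float at the given IEEE 754 precision, then unpack it back."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

pi = 3.14159
fp16 = round_trip(pi, '<e')  # half precision: 10-bit mantissa
fp32 = round_trip(pi, '<f')  # single precision: 23-bit mantissa
fp64 = round_trip(pi, '<d')  # double precision: 52-bit mantissa

print(fp16)  # 3.140625 -- rounding error visible in the third decimal place
print(fp32)  # much closer to the input
print(fp64)  # exact round trip for a Python float
```

The visible error in `fp16` is why half precision is typically paired with loss-scaling or mixed-precision techniques for training, while being well suited to inference.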
For more information, see the NVIDIA A100 Tensor Core GPU documentation.