Application and storage node configuration
Dell EMC engineers configured the application nodes with an NVIDIA Tesla T4 GPU and the storage nodes with an NVIDIA Tesla V100 GPU to accelerate computation of complex ML/DL workloads. The NVIDIA T4 GPU is based on the Turing architecture and is packaged in an energy-efficient, 70-watt, small PCIe form factor. We installed a single NVIDIA Tesla T4 GPU in each application node. The Tesla T4 is optimized for mainstream computing environments, including DL training and inference, and features multi-precision Turing Tensor Cores and new RT Cores that deliver up to 65 teraFLOPS of mixed-precision compute power for accelerating ML/DL workloads.
The storage nodes operated in hyperconverged mode, and each was installed with a single NVIDIA Tesla V100 GPU. The NVIDIA Tesla V100 GPU accelerator offers up to 112 teraFLOPS of mixed-precision compute capability in a single GPU, enabling data scientists, researchers, and engineers to tackle new challenges.
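On OpenShift Container Platform, workloads consume GPUs such as these through the `nvidia.com/gpu` extended resource exposed by the NVIDIA device plugin. The sketch below shows a minimal pod that requests one GPU; it assumes the device plugin is already deployed on the cluster, and the image tag is an illustrative assumption rather than a value from this paper.

```yaml
# Minimal sketch: request one GPU via the NVIDIA device plugin's
# extended resource (nvidia.com/gpu). The image tag below is an
# illustrative assumption, not a value taken from this paper.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:10.0-base   # hypothetical CUDA base image tag
    command: ["nvidia-smi"]        # reports the GPU visible to the pod
    resources:
      limits:
        nvidia.com/gpu: 1          # schedule onto a node with a free GPU
```

Creating this pod with `oc create -f` lets the scheduler place it on any node advertising a free GPU; a `nodeSelector` on a node label could pin it to the T4 application nodes or the V100 storage nodes specifically.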