Dell Technologies engineers configured the application nodes with the Nvidia Tesla T4 GPU and the storage nodes with the Nvidia Tesla V100 GPU to accelerate computation of complex ML/DL workloads. The Nvidia T4 GPU is based on the Turing architecture and is packaged in an energy-efficient 70-watt, small PCIe form factor; we installed a single T4 GPU in each application node. The T4 is optimized for mainstream computing environments, including DL training and inference, and includes multi-precision Turing Tensor Cores and new RT Cores that deliver up to 65 teraFLOPS of mixed-precision compute power for accelerating ML/DL workloads.
The storage nodes were operated in hyperconverged mode, and each storage node was installed with a single Nvidia Tesla V100 GPU. The V100 offers up to 112 teraFLOPS of mixed-precision compute capability in a single GPU.
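After installation, the per-node GPU configuration described above can be confirmed with Nvidia's `nvidia-smi` utility. The sketch below is a minimal illustration, not part of the validated reference architecture; it assumes the Nvidia driver is installed on the node being checked and falls back to a message when it is not.

```shell
# Sanity-check the GPU inventory on a node (application nodes should
# report one Tesla T4; storage nodes should report one Tesla V100).
if command -v nvidia-smi >/dev/null 2>&1; then
    # One line per detected GPU: model name and configured power limit.
    nvidia-smi --query-gpu=name,power.limit --format=csv,noheader
else
    echo "nvidia-smi not found: Nvidia driver not installed on this node"
fi
```

On an application node this would list the single 70-watt T4; on a storage node, the single V100.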