The NVIDIA Tesla V100 is the latest data center GPU available to accelerate deep learning. Powered by NVIDIA Volta, the latest GPU architecture, Tesla V100 GPUs enable data scientists, researchers, and engineers to tackle challenges that were once considered intractable. With 640 Tensor Cores, the Tesla V100 is the first GPU to break the 100 teraflops (TFLOPS) barrier for deep learning performance.
| Description | Value |
|---|---|
| CUDA Cores | 5120 |
| GPU Max Clock Rate (MHz) | 1530 |
| Tensor Cores | 640 |
| Memory Bandwidth (GB/s) | 900 |
| NVLink Bandwidth (GB/s, bi-directional) | 300 |
| Deep Learning (Tensor TFLOPS) | 120 |
| TDP (Watts) | 300 |
With the V100-SXM2 model, all GPUs in a node are connected by NVIDIA NVLink. Each V100-SXM2 GPU provides six NVLinks for bi-directional communication, and each NVLink carries 25 GB/s per direction (50 GB/s bi-directional). A single GPU can therefore drive 6 * 25 = 150 GB/s of outbound traffic, and because all four GPUs within a node can communicate at the same time, the theoretical peak aggregate bandwidth is 6 * 25 * 4 = 600 GB/s bi-directional.
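The bandwidth arithmetic above can be written out as a short calculation. This is a sketch assuming the six links of each GPU are spread across the other GPUs of a 4-GPU node; the 12-link view is simply an equivalent way of counting the same 600 GB/s:

```python
# Peak aggregate NVLink bandwidth of a 4-GPU V100-SXM2 node,
# using the per-link figures from the text above.
links_per_gpu = 6
link_bw_uni_gbs = 25           # GB/s per direction per NVLink
gpus_per_node = 4

# Each GPU can drive 6 * 25 = 150 GB/s outbound; with all four GPUs
# transmitting simultaneously the node peaks at 600 GB/s.
per_gpu_uni_gbs = links_per_gpu * link_bw_uni_gbs        # 150 GB/s
node_peak_gbs = per_gpu_uni_gbs * gpus_per_node          # 600 GB/s

# Equivalent count: the node contains 4 * 6 / 2 = 12 point-to-point
# links, each carrying 50 GB/s bi-directionally.
links_in_node = gpus_per_node * links_per_gpu // 2       # 12 links
node_peak_alt_gbs = links_in_node * 2 * link_bw_uni_gbs  # 600 GB/s

assert node_peak_gbs == node_peak_alt_gbs == 600
print(f"Theoretical peak: {node_peak_gbs} GB/s bi-directional")
```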