Conclusion
This document discussed the key features of Isilon that make it a powerful persistent storage platform for deep learning solutions. We presented a typical hardware architecture for deep learning that combines Dell EMC C4140 servers with embedded NVIDIA Volta GPUs and all-flash Isilon storage. We ran several image classification benchmarks and reported system performance in terms of the rate of images processed and the IO throughput profile to disk. We also monitored and reported CPU and GPU utilization and memory statistics, which demonstrated that the server, GPU, and memory resources were fully utilized while IO was not fully saturated.
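The two headline metrics above, image processing rate and storage throughput, can be derived from raw counters in a straightforward way. The sketch below illustrates the arithmetic only; the function names and the image count, file size, and elapsed time are illustrative assumptions, not measured results from this whitepaper.

```python
# Illustrative sketch of how the benchmark metrics reported in this paper
# (images/sec and storage throughput) can be computed from raw counters.
# All values below are placeholder assumptions, not measured results.

def images_per_second(total_images: int, elapsed_seconds: float) -> float:
    """Training rate: images processed per second."""
    return total_images / elapsed_seconds

def io_throughput_mb_s(bytes_read: int, elapsed_seconds: float) -> float:
    """Storage read throughput in MB/s (decimal megabytes)."""
    return bytes_read / elapsed_seconds / 1_000_000

# Hypothetical run: 50,000 JPEG images of ~110 KB each, read in 25 seconds.
rate = images_per_second(50_000, 25.0)               # -> 2000.0 images/s
tput = io_throughput_mb_s(50_000 * 110_000, 25.0)    # -> 220.0 MB/s
```

Comparing the measured IO throughput against the storage system's rated capability is what lets a benchmark like this conclude that IO was not the saturated resource.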
Deep learning algorithms have a diverse set of requirements, with varying compute, memory, IO, and disk capacity profiles. That said, the architecture and performance data points presented in this whitepaper can be used as a starting point for building deep learning solutions tailored to a varied set of resource requirements. More importantly, all components of this architecture scale linearly and can be expanded to provide deep learning solutions that manage tens of petabytes of data.
While the solution presented here provides several performance data points and speaks to the effectiveness of Isilon in handling large-scale deep learning workloads, persisting data for deep learning on Isilon also offers several other operational benefits.
In summary, Isilon-based deep learning solutions deliver the capacity, performance, and high concurrency needed to eliminate storage I/O bottlenecks for AI. This provides a solid foundation for large-scale, enterprise-grade deep learning solutions with a future-proof scale-out architecture that meets your AI needs of today and scales for the future.