This document discussed the key features of Dell PowerScale scale-out NAS as a persistent storage solution for ADAS deep learning (DL) workloads. We presented a typical hardware architecture for DL that combines Dell PowerEdge servers with embedded NVIDIA Volta GPUs and all-flash PowerScale storage. We ran several object detection benchmarks and reported system performance based on the rate of images processed and the throughput profile of I/O to disk. We also monitored and reported CPU and GPU utilization and memory statistics, demonstrating that the server, GPU, and memory resources were fully utilized while Dell PowerScale was still capable of delivering more I/O.
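The headline metric above, the rate of images processed, is simply total images divided by wall-clock time over a benchmark run. A minimal sketch of how such a rate could be measured is shown below; `measure_throughput` and `process_batch` are illustrative names, not from the paper, and the dummy workload stands in for an actual inference or training step.

```python
import time

def measure_throughput(process_batch, batches, batch_size):
    """Return images/sec over a run of batches.

    Hypothetical helper for illustration: `process_batch` is whatever
    callable performs one DL step (inference or training) on a batch.
    """
    start = time.perf_counter()
    for batch in batches:
        process_batch(batch)
    elapsed = time.perf_counter() - start
    return (len(batches) * batch_size) / elapsed

# Dummy workload standing in for a real DL step on GPU-backed servers.
def process_batch(batch):
    sum(batch)  # trivial CPU work as a placeholder

batch_size = 64
batches = [list(range(batch_size)) for _ in range(100)]
images_per_sec = measure_throughput(process_batch, batches, batch_size)
print(f"{images_per_sec:.0f} images/sec")
```

In a real benchmark the same loop would be driven by the training framework's data loader reading from PowerScale, so the measured rate reflects the full storage-to-GPU pipeline rather than compute alone.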
DL algorithms have a diverse set of requirements, with varying compute, memory, I/O, and disk capacity profiles. That said, the architecture and performance data points presented in this whitepaper can serve as a starting point for building DL solutions tailored to a varied set of resource requirements. More importantly, all components of this architecture are linearly scalable and can be expanded to provide DL solutions that manage tens to thousands of petabytes of data.
While the solution presented here provides several performance data points and speaks to the effectiveness of PowerScale in handling large-scale DL workloads, keeping persistent data for DL on PowerScale also delivers several other operational benefits.
In summary, PowerScale-based DL solutions deliver the capacity, flexibility, performance, and high concurrency needed to eliminate I/O storage bottlenecks for AI. This provides a solid foundation for large-scale, enterprise-grade DL solutions with a future-proof scale-out architecture that meets your AI needs of today and scales for the future.