This document presents a high-performance architecture for DL that combines Dell PowerEdge R7525 servers with AMD Instinct MI100 GPUs, Dell PowerSwitch Z9332F-ON switches, and Dell PowerScale F900 all-flash storage. We have discussed the key features of PowerScale that make it a powerful persistent storage platform for DL solutions. This architecture extends the commitment from Dell Technologies to make AI simple and accessible to every organization, offering customers informed choice and flexibility in how they deploy high-performance DL at scale. Throughout the benchmark process, we validated that PowerScale F900 scale-out NAS storage kept pace with the AMD Instinct MI100 GPUs and scaled performance linearly.
DL algorithms have diverse requirements, with varying compute, memory, I/O, and disk capacity profiles. The architecture and performance data points presented in this white paper can therefore serve as a starting point for building DL solutions tailored to different resource requirements. More importantly, all components of this architecture scale linearly and can be expanded independently, enabling DL solutions that manage tens of PBs of data.
The solution presented here provides several performance data points and demonstrates the effectiveness of PowerScale in handling large-scale DL workloads. Beyond performance, persisting DL data on PowerScale also offers operational benefits, including enterprise data management and protection capabilities.
In summary, PowerScale-based DL solutions deliver the capacity, performance, and high concurrency needed to eliminate storage I/O bottlenecks for AI. This provides a rock-solid foundation for large-scale, enterprise-grade DL solutions with a future-proof scale-out architecture that meets your AI needs today and scales for the future.