Storage design
Dell PowerScale delivers massive AI performance with high density. It accelerates all phases of the AI pipeline, from model training to inferencing and fine-tuning. With up to 24 NVMe SSD drives per node and 300 PB of storage per cluster, it keeps GPUs fully utilized during large-scale model training and drives faster time to AI insights, with up to 127 percent improved throughput[1].
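As a rough illustration of how the per-node drive count relates to the cluster-level capacity figure, the short Python sketch below works through the raw-capacity arithmetic. The drive size used here is an assumption chosen only to make the numbers concrete; it is not a published PowerScale configuration, and usable capacity will be lower than raw capacity once data protection overhead is applied.

# Back-of-the-envelope capacity arithmetic for an all-flash cluster.
# The drive size below is an illustrative assumption, not a Dell specification.

DRIVES_PER_NODE = 24        # up to 24 NVMe SSDs per node (from the text above)
DRIVE_SIZE_TB = 61.44       # assumed NVMe drive size in TB (hypothetical)
TARGET_CLUSTER_PB = 300     # cluster-scale capacity cited in the text

raw_node_capacity_tb = DRIVES_PER_NODE * DRIVE_SIZE_TB
raw_node_capacity_pb = raw_node_capacity_tb / 1000

# How many such nodes would it take to reach roughly 300 PB of raw capacity?
nodes_needed = TARGET_CLUSTER_PB / raw_node_capacity_pb

print(f"Raw capacity per node: {raw_node_capacity_tb:,.2f} TB (~{raw_node_capacity_pb:.2f} PB)")
print(f"Nodes needed for ~{TARGET_CLUSTER_PB} PB raw: {nodes_needed:.0f}")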
PowerScale is no stranger to AI-optimized infrastructure. It was one of the first storage platforms to offer low-latency storage access with Network File System over Remote Direct Memory Access (NFSoRDMA), multitenant capabilities, simultaneous multiprotocol support, and 6x9s (99.9999 percent) availability and resiliency for uninterrupted uptime.
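To put the 6x9s availability figure in concrete terms, the small Python sketch below converts an availability percentage into an annual downtime budget. The calculation is generic arithmetic for illustration, not a Dell-published figure.

# Convert an availability level ("number of nines") into an annual downtime budget.
# Generic arithmetic for illustration; not a Dell-published figure.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds (non-leap year)

def downtime_per_year(availability_pct: float) -> float:
    """Return the maximum allowed downtime per year, in seconds."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for nines, pct in [(3, 99.9), (4, 99.99), (5, 99.999), (6, 99.9999)]:
    print(f"{nines} nines ({pct}%): {downtime_per_year(pct):,.1f} seconds/year")

At 6x9s, this works out to roughly 31.5 seconds of allowed downtime per year.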
Building on that foundation through continuous software and hardware innovation, our next-generation all-flash systems form a key component of Dell’s AI-ready data platform.
PowerScale’s continuous innovation extends into the AI era with the introduction of the next generation of PowerEdge-based nodes, including the PowerScale F710. The new all-flash nodes are built on Dell PowerEdge 16G servers, unlocking the next generation of performance. On the software side, the F710 takes advantage of significant performance improvements in PowerScale OneFS 9.7. By combining the latest hardware and software innovations, the F710 can tackle the most demanding workloads with ease.
[1] Disclosure: Based on internal testing, comparing streaming write of F910 on OneFS 9.8 to streaming write of F900 on OneFS 9.5. Results might vary. April 2024.
[2] Disclosure: Based on Dell analysis comparing efficiency-related features including data reduction, storage capacity, data protection, hardware, space, life cycle management efficiency, and ENERGY STAR certified configurations, June 2023.