Based on the modular, scalable architecture for generative AI described earlier, powered by Dell and NVIDIA components, this family of designs initially includes three system configurations, each optimized for a particular use case: inferencing, model customization, and model training.
The following sections describe the system configuration for each area of focus at a high level. Note that the control plane, data storage, and Ethernet networking are similar across the three configurations. Therefore, if you are building AI infrastructure that addresses two or more use cases, these core resources can be shared.