Dell Technologies provides a diverse selection of acceleration-optimized servers with an extensive portfolio of accelerators featuring NVIDIA GPUs. In this design, we showcase three Dell PowerEdge servers specifically tailored for generative AI purposes: the PowerEdge R760xa, the PowerEdge XE8640, and the PowerEdge XE9680.
In this section, we describe the configuration and connectivity options for NVIDIA GPUs, and how these server-GPU combinations can be applied to various LLM use cases.
This design for inferencing supports several NVIDIA GPU acceleration options. The following table summarizes the GPUs used in this design:
Table 1. NVIDIA GPUs - Technical specifications and use cases

| | NVIDIA H100 SXM GPU | NVIDIA H100 PCIe GPU | NVIDIA L40 PCIe GPU |
|---|---|---|---|
| Supported latest PowerEdge servers (and maximum number of GPUs) | PowerEdge XE9680 (8); PowerEdge XE8640 (4) | PowerEdge R760xa (4); PowerEdge R760 (2) | PowerEdge R760xa (4); PowerEdge R760 (2) |
| GPU memory | 80 GB | 80 GB | 48 GB |
| Form factor | SXM | PCIe (dual width, dual slot) | PCIe (dual width, dual slot) |
| GPU interconnect | 900 GB/s NVLink | 600 GB/s NVLink Bridge (supported in PowerEdge R760xa); 128 GB/s PCIe Gen5 | None |
| Multi-Instance GPU (MIG) support | Up to 7 MIGs | Up to 7 MIGs | None |
| Decoders | 7 NVDEC; 7 JPEG | 7 NVDEC; 7 JPEG | 3 NVDEC; 3 NVENC |
| Max thermal design power (TDP) | 700 W | 350 W | 300 W |
| NVIDIA AI Enterprise | Add-on | Included with H100 PCIe | Add-on |
| Use cases | Generative AI training; large-scale distributed training | Discriminative/predictive AI training and inference; generative AI inference | Small-scale AI; visual computing; discriminative/predictive AI inference |
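On a deployed server, the GPU inventory reported by the NVIDIA driver can be used to confirm these specifications. The following Python sketch is illustrative only and not part of the validated design; it assumes the nvidia-ml-py (pynvml) bindings and the NVIDIA driver are installed, and lists each GPU's model name and total memory.

```python
# Minimal sketch: list GPU count, model, and memory through NVML.
# Assumes the nvidia-ml-py package (imported as pynvml) and the NVIDIA driver
# are installed on the target server; illustrative only.
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    print(f"GPUs detected: {count}")
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GB total memory")
finally:
    pynvml.nvmlShutdown()
```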
NVIDIA GPUs support several options for connecting two or more GPUs, each offering different bandwidth. GPU-to-GPU connectivity is often required for multi-GPU applications, especially when high performance and low latency are crucial. LLMs often do not fit in the memory of a single GPU and are typically deployed across multiple GPUs, which therefore require high-speed connectivity between them.
NVIDIA NVLink is a high-speed interconnect technology developed by NVIDIA for connecting multiple NVIDIA GPUs to work in parallel. It allows for direct communication between the GPUs with high bandwidth and low latency, enabling them to share data and work collaboratively on compute-intensive tasks.
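On a server that is already deployed, NVML can also report whether NVLink links are active between GPUs. The following Python sketch again assumes the nvidia-ml-py (pynvml) bindings and is illustrative only; on GPUs without NVLink, such as the L40, the query simply reports no active links.

```python
# Minimal sketch: report active NVLink links per GPU through NVML.
# Assumes nvidia-ml-py (pynvml); GPUs without NVLink raise an NVMLError for
# the link query, which is treated here as "no active links".
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        active_links = 0
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                if pynvml.nvmlDeviceGetNvLinkState(handle, link):
                    active_links += 1
            except pynvml.NVMLError:
                break  # link index not supported on this GPU
        print(f"GPU {i}: {active_links} active NVLink link(s)")
finally:
    pynvml.nvmlShutdown()
```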
The following figure illustrates the NVIDIA GPU connectivity options for the PowerEdge servers used in this design:
Figure 2. NVIDIA GPU connectivity in PowerEdge servers
PowerEdge servers support several different NVLink options:
The PowerEdge R760xa server supports four NVIDIA H100 PCIe GPUs, and an NVLink bridge can connect each pair of GPUs. The NVIDIA H100 GPU supports an NVLink bridge connection with a single adjacent NVIDIA H100 GPU. Each of the three attached bridges spans two PCIe slots, for a total maximum NVLink bridge bandwidth of 600 GB/s.
During inference, an AI model's parameters are stored in GPU memory. An LLM might require the memory of multiple GPUs to accommodate its entire neural network. In such cases, the GPUs must be interconnected with NVLink technology to effectively support the model's operations and ensure seamless communication between the GPUs. Therefore, the size of the LLM that an enterprise requires dictates which PowerEdge server model to choose for the inference infrastructure. The following table provides example LLM models that can be deployed in the PowerEdge servers; a simple sizing sketch follows the table.
Table 2. Example models supported in PowerEdge servers
| Model characteristics | PowerEdge R760xa with H100 PCIe using NVLink Bridge | PowerEdge XE8640 with H100 SXM | PowerEdge XE9680 with H100 SXM |
|---|---|---|---|
| Total GPU memory available | 320 GB | 320 GB | 640 GB |
| Maximum memory footprint of a model that can run | 160 GB | 320 GB | 640 GB |
| Example open-source LLMs | NeMo GPT 345M, 1.3B, 2B, 5B, and 20B; Llama 2 7B and 13B | All models listed for the R760xa | All models listed for the XE8640, plus Llama 2 70B (deployed across 8 GPUs) and BLOOM 176B |
For more information about generative AI models that were validated as part of this design, see Table 6.
Note: The preceding table does not consider scenarios in which a model spans multiple nodes (servers) that are interconnected with a high-speed network such as InfiniBand.
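As a rough guide to the memory figures in Table 2, a model's weight footprint can be estimated from its parameter count and numeric precision; the KV cache, activations, and framework buffers require additional headroom at serving time. The following Python sketch uses FP16/BF16 weights (2 bytes per parameter) as an illustrative assumption and is not a measured result from this design:

```python
# Simplified sizing sketch: approximate weight memory for example models at
# FP16/BF16 precision (2 bytes per parameter). KV cache, activations, and
# framework buffers need additional headroom beyond these figures, so compare
# the results against the "maximum memory footprint" row in Table 2 with margin.
def weight_footprint_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate memory required for model weights, in GB."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

for name, params_b in [("NeMo GPT 20B", 20), ("Llama 2 70B", 70), ("BLOOM 176B", 176)]:
    print(f"{name}: ~{weight_footprint_gb(params_b):.0f} GB of weights at FP16")
```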