Overview
MLPerf Inference v0.7 benchmark workloads were executed on two Dell EMC PowerEdge systems under test (SUTs), the R7515 and the R7525, which use AMD processors accelerated by NVIDIA GPUs (Tesla T4, Quadro RTX 8000, and A100).
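For context, MLPerf Inference exercises each SUT through the LoadGen library, which issues queries under a chosen scenario and records latency and throughput. The following is a minimal illustrative sketch, assuming the mlperf_loadgen Python bindings from the MLCommons inference repository are installed; run_model is a hypothetical placeholder for the actual GPU backend (NVIDIA's submissions run TensorRT engines), and the ConstructSUT signature has varied slightly across LoadGen releases.

```python
# Minimal sketch of the MLPerf Inference LoadGen flow, assuming the
# mlperf_loadgen Python bindings are installed. run_model is a hypothetical
# stand-in for the real GPU backend (e.g., a TensorRT engine).
import array
import numpy as np
import mlperf_loadgen as lg

TOTAL_SAMPLES = 1024          # size of the query sample library (QSL)
PERF_SAMPLES = 1024           # samples kept resident during the performance run

def run_model(sample_index):
    # Placeholder inference call; a real SUT would run the sample on the GPU.
    return np.zeros(1000, dtype=np.float32)

def issue_queries(query_samples):
    # LoadGen calls this with the queries it wants answered; the SUT reports
    # completion of each sample via QuerySamplesComplete.
    keep_alive = []
    responses = []
    for qs in query_samples:
        output = run_model(qs.index)
        buf = array.array("B", output.tobytes())
        keep_alive.append(buf)                     # keep buffer alive until reported
        addr, item_count = buf.buffer_info()
        responses.append(lg.QuerySampleResponse(qs.id, addr, item_count * buf.itemsize))
    lg.QuerySamplesComplete(responses)

def flush_queries():
    pass

def load_samples(sample_indices):
    pass  # a real QSL would stage these samples into host/GPU memory

def unload_samples(sample_indices):
    pass

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline        # Offline, Server, SingleStream, ...
settings.mode = lg.TestMode.PerformanceOnly

# Note: some older LoadGen releases passed an extra latency callback here.
sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(TOTAL_SAMPLES, PERF_SAMPLES, load_samples, unload_samples)
lg.StartTest(sut, qsl, settings)
lg.DestroySUT(sut)
lg.DestroyQSL(qsl)
```

In a real submission, the scenario, target QPS, and latency constraints come from a configuration file per benchmark and system, and the backend replaces the placeholder above.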
The R7515 and R7525 both offer configuration flexibility to address inference performance as well as data center power and cost requirements. Inference applications can be deployed on the single-socket AMD system (R7515) without compromising accelerator support, storage, or I/O capacity, or on the dual-socket system (R7525) in configurations that support higher capabilities. Both platforms provide PCIe Gen4 links for the latest PCIe Gen4-capable GPUs, such as the NVIDIA A100 and the upcoming AMD Radeon Instinct MI100.
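As a quick sanity check when validating such a configuration, one could confirm that each accelerator has negotiated a PCIe Gen4 link. The sketch below assumes the pynvml Python bindings for the NVIDIA Management Library are installed; note that an idle GPU may report a lower link generation until it is placed under load.

```python
# Illustrative sketch (pynvml assumed installed): report the negotiated PCIe
# link generation and width for each GPU in the system.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):   # older pynvml versions return bytes
            name = name.decode()
        gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
        print(f"GPU {i}: {name}, PCIe Gen{gen} x{width}")
finally:
    pynvml.nvmlShutdown()
```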
In general, Dell PowerEdge platforms offer various PCIe riser options that support multiple low-profile GPU accelerators (up to 8 x T4) or up to three full-height, double-wide GPU accelerators (RTX 8000 or A100). Customers can choose the GPU model and the number of GPUs based on their workload requirements and their data center power and density needs.
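To illustrate such sizing decisions, the short sketch below (again assuming pynvml is installed) enumerates the installed GPUs and reports each card's enforced power limit and memory capacity, the figures typically weighed against rack power and density budgets.

```python
# Illustrative sketch (pynvml assumed installed): enumerate GPUs and report
# the per-card power limit and memory capacity, plus the combined power limit.
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    total_power_w = 0.0
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):   # older pynvml versions return bytes
            name = name.decode()
        limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0  # mW -> W
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        total_power_w += limit_w
        print(f"GPU {i}: {name}, power limit {limit_w:.0f} W, "
              f"memory {mem.total / 2**30:.0f} GiB")
    print(f"{count} GPU(s), combined power limit {total_power_w:.0f} W")
finally:
    pynvml.nvmlShutdown()
```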