The MLPerf Inference benchmark measures how fast a system can perform deep learning inference using a trained model in various deployment scenarios. The following figure shows the Offline and Server scenario results of the MLPerf Inference benchmark, plotted with an exponentially scaled y-axis:
Key takeaways include:
Note: Due to time constraints, 3D-UNet and DLRM results were not submitted. Figure 5 includes unverified results for these benchmarks.
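The Offline and Server scenarios differ mainly in how MLPerf LoadGen issues queries to the system under test: Offline delivers the entire query set at once and reports throughput in samples per second, while Server issues queries on a Poisson arrival schedule and reports the queries per second sustained within a per-benchmark latency bound. The sketch below illustrates how a harness might configure each scenario, assuming the mlperf_loadgen Python bindings from the MLCommons inference repository are available; the target and duration values are illustrative placeholders, not the configuration used for these submissions.

```python
# Minimal sketch of configuring MLPerf LoadGen for the Offline and Server
# scenarios. Assumes the mlperf_loadgen Python bindings; all numeric targets
# below are illustrative, not the submitted R750xa configuration.
import mlperf_loadgen as lg


def make_settings(scenario: str) -> lg.TestSettings:
    settings = lg.TestSettings()
    settings.mode = lg.TestMode.PerformanceOnly

    if scenario == "Offline":
        # Offline: LoadGen sends all samples in one query; the metric is
        # throughput (samples per second).
        settings.scenario = lg.TestScenario.Offline
        settings.offline_expected_qps = 40000  # illustrative target
    elif scenario == "Server":
        # Server: queries arrive on a Poisson schedule and must meet a
        # per-benchmark latency bound; the metric is sustained QPS.
        settings.scenario = lg.TestScenario.Server
        settings.server_target_qps = 30000              # illustrative target
        settings.server_target_latency_ns = 15_000_000  # illustrative 15 ms bound
    else:
        raise ValueError(f"unsupported scenario: {scenario}")

    # Each performance run must satisfy a minimum duration.
    settings.min_duration_ms = 60000
    return settings
```

A complete run would additionally construct a system-under-test and query sample library and pass them to lg.StartTest together with these settings; that plumbing is benchmark-specific and omitted here.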