MLPerf has emerged as an industry-standard benchmark for deep learning workloads, and the number of interested submitters and submissions continues to grow. This round sets a new record with over 5,300 performance results and 2,400 power measurement results, a thirty percent increase in performance submissions and a nine percent increase in power submissions compared to the MLPerf Inference v2.0 round.
MLPerf not only provides a way to compare different systems like-to-like; it also gives customers access to performance-optimized code from vendors and OEMs. These resources are another factor supporting the growth of MLPerf as an industry-standard benchmarking platform.