
Latest MLPerf Results Display Gains for All 

SAN FRANCISCO, Nov. 9, 2022 — Today, MLCommons, an open engineering consortium, announced new results from the industry-standard MLPerf Training, HPC and Tiny benchmark suites. Collectively, these benchmark suites scale from ultra-low power devices that draw just a few microwatts for inference all the way up to the most powerful multi-megawatt data center training platforms and supercomputers. The latest MLPerf results demonstrate up to a 5X improvement in performance, helping to deliver faster insights and deploy more intelligent capabilities in systems at all scales and power levels.

The MLPerf benchmark suites are comprehensive system tests that stress machine learning models along with the underlying software and hardware, and in some cases optionally measure energy usage. The open-source and peer-reviewed benchmark suites create a level playing field for competition, which fosters innovation and benefits society at large through better performance and energy efficiency for AI and ML applications.

The MLPerf Training benchmark suite measures the performance of training machine learning models used in commercial applications such as movie recommendation, speech-to-text, autonomous vehicles, and medical imaging. MLPerf Training v2.1 includes nearly 200 results from 18 different submitters, spanning all the way from small workstations up to large-scale data center systems with thousands of processors.

The MLPerf HPC benchmark suite is targeted at supercomputers and measures the time it takes to train machine learning models for scientific applications; it also incorporates an optional throughput metric for large systems that commonly support multiple users. The scientific workloads include weather modeling, cosmological simulation, and predicting chemical reactions based on quantum mechanics. MLPerf HPC 2.0 includes over 20 results from 5 organizations, with time-to-train and throughput for all models, and submissions from some of the world’s largest supercomputers.

The MLPerf Tiny benchmark suite is intended for the lowest-power devices and smallest form factors, such as deeply embedded, intelligent sensing, and internet-of-things applications. It measures inference performance (how quickly a trained neural network can process new data) and includes an optional energy measurement component. MLPerf Tiny 1.0 encompasses submissions from 8 different organizations, including 59 performance results, 39 of which (just over 66%) include energy measurements, an all-time record.
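As a quick sanity check of the share quoted above (the 59 and 39 figures come from the release itself; the snippet is purely illustrative arithmetic):

```python
# Figures from the MLPerf Tiny 1.0 release: 59 performance results,
# 39 of which include energy measurements.
performance_results = 59
energy_measurements = 39

# Fraction of submissions with energy measurements.
share = energy_measurements / performance_results
print(f"{share:.1%}")  # prints "66.1%", i.e. just over 66%
```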

“We are pleased to see the growth in the machine learning community and especially excited to see the first submissions from Dell in MLPerf HPC and GreenWaves Technologies, OctoML, and Qualcomm in MLPerf Tiny,” said MLCommons Executive Director David Kanter. “The increasing adoption of energy measurement is particularly exciting, as a demonstration of the industry’s outstanding commitment to efficiency.”

To view the results and find additional information about the benchmarks please visit:

Training: https://mlcommons.org/en/training-normal-21

HPC: https://mlcommons.org/en/training-hpc-20

Tiny: https://www.mlcommons.org/en/inference-tiny-10

About MLCommons

MLCommons is an open engineering consortium with a mission to benefit society by accelerating innovation in machine learning. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 50+ founding partners (global technology providers, academics, and researchers), MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.

For additional information on MLCommons and details on becoming a Member or Affiliate of the organization, please visit http://mlcommons.org or contact [email protected].


Source: MLCommons

About the author: Tiffany Trader

With over a decade’s experience covering the HPC space, Tiffany Trader is one of the preeminent voices reporting on advanced scale computing today.

EnterpriseAI