
Nvidia GPUs Stay in Lead in Latest MLPerf Inference Results, but CPUs and Intel Gaining Ground

Nvidia again dominated the latest round of MLPerf inference benchmark (v1.1) results when they were unveiled Sept. 23 (Thursday), sweeping the top spots in the closed data center and edge categories. And in an interesting show of progress, Intel demonstrated x86 competence for inferencing, and Arm chips even showed up in the data center results, not just in the edge category. Even IBM, though not an MLPerf participant, is jumping into the host-CPU-as-inference-engine camp. Can AMD be far behind?

Make no mistake, Nvidia GPUs remain the kings of pure-play AI workloads. They have dominated the MLPerf benchmarks (training and inference) since the start in 2018. “We run every workload, every scenario, every use case for both data center and edge in MLPerf. And we are actually the only company that does that,” said David Salvator, senior product manager, inference and cloud, Nvidia, in a pre-briefing yesterday.

 

That said, there seems to be a new dynamic arising around the use of host CPUs for inferencing. The basic idea is that more modest inferencing demands are proliferating inside many general-purpose workloads, where the host CPU has sufficient capability for the job and is less expensive. CPU vendors are betting this will represent a substantial market. This approach contrasts with the current accelerator-driven paradigm in which AI workloads are offloaded to an accelerator, most often a GPU.
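To make the contrast concrete, here is a minimal PyTorch sketch of the two paths, inference on the host CPU versus offload to a GPU; the tiny model and batch are placeholders for illustration, not anything drawn from the MLPerf submissions.

```python
import torch

# Placeholder model standing in for a modest inference step embedded in a
# general-purpose workload (e.g., a small recommendation or classification model).
model = torch.nn.Sequential(
    torch.nn.Linear(256, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
).eval()

x = torch.randn(32, 256)  # a small batch of feature vectors

# Host-CPU path: no offload, no device transfer.
with torch.no_grad():
    cpu_out = model(x)

# Accelerator path: offload model and data to a GPU when one is available.
if torch.cuda.is_available():
    gpu_model = model.to("cuda")
    with torch.no_grad():
        gpu_out = gpu_model(x.to("cuda"))
```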

Jordan Plawner, director of AI products and business for Intel, said his company is working to present new messaging about its chips.

“Nvidia has been great at saying you must use an Nvidia GPU to run AI, period, full stop," said Plawner. "We are just trying to enlarge that conversation and say there’s many, many use cases [that don’t require a GPU]. My first job is to make sure I never give a developer an excuse to not use a Xeon processor because something is not working well, because something was optimized for GPUs, [and] because that’s what everyone’s been doing. [We] have a few hundred people making sure that everything works well out of the box on Xeon."

For customers and workloads, it is all about use cases, said Plawner.

“So inferencing, specifically, is 80 percent of the time just an underlying function," he said. "The typical kind of use case is live-streaming inferencing in which there’s a web tier app, it’s always making decisions or recommendations of some kind. Someone – a business or a consumer – is pinging it with a request or on a thread and I’m making a decision to use AI to augment [or] automate the decision or to make a recommendation, and [doing that] is only going to [require] so many threads and requests on that system a second."

Intel is focused on cases where AI is in the workload, or part of the workload, but is not the full workload itself, said Plawner. "The mission of Xeon is just to kind of close that gap with the accelerators, and meet the SLA of the customer, who says, 'I need to infer speech translation, images, by x number of inferences per second, per server, and I only have six cores to do it.' We do this all the time and say we can do that in two cores or four cores.”
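Plawner’s core-count framing can be sketched in a few lines. Assuming a PyTorch deployment (Intel’s actual submission stacks differ), capping intra-op threads keeps inference on a small slice of the host CPU while the remaining cores serve the surrounding application; the model, thread count, and throughput check below are illustrative only.

```python
import time
import torch

# Cap intra-op parallelism so inference uses only a few cores of the host CPU;
# the remaining cores stay available for the web tier or other application logic.
torch.set_num_threads(4)

model = torch.nn.Sequential(  # stand-in for a small production model
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 64),
).eval()

x = torch.randn(16, 512)

# Rough throughput check against a (hypothetical) per-server SLA.
with torch.no_grad():
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    elapsed = time.perf_counter() - start

print(f"~{100 * 16 / elapsed:.0f} samples/s on {torch.get_num_threads()} threads")
```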

In the latest results, Intel broadly demonstrated significant improvement gen-over-gen (Ice Lake over Cooper Lake) in the Xeon line. Plawner cited Intel’s DL Boost including vector neural network instructions (VNNI) in its INT8-based submissions on all workloads as drivers of performance gains. He said Intel expects a many-fold jump in inference performance with Sapphire Rapids and that Intel is working with systems makers to participate in future MLPerf inference runs. Presumably, it will do the same with its forthcoming Ponte Vecchio GPU. Intel has posted a blog with more detailed MLPerf results.
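DL Boost’s VNNI instructions accelerate INT8 matrix math. As a rough, hedged illustration of what an INT8 deployment can look like, the sketch below uses PyTorch dynamic quantization, whose CPU kernels can exploit VNNI on recent Xeons; it is a stand-in, not the toolchain Intel used for its submissions.

```python
import torch

# FP32 baseline model (placeholder for a real network such as BERT or ResNet).
fp32_model = torch.nn.Sequential(
    torch.nn.Linear(768, 768),
    torch.nn.ReLU(),
    torch.nn.Linear(768, 2),
).eval()

# Dynamic quantization converts the Linear layers to INT8; on recent Xeons the
# underlying kernels can use VNNI for the INT8 dot products.
int8_model = torch.quantization.quantize_dynamic(
    fp32_model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(8, 768)
with torch.no_grad():
    print(int8_model(x).shape)  # same outputs, lower-precision arithmetic
```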

MLCommons, the parent organization for MLPerf, reported that the latest inference benchmark round received submissions from 20 organizations and released more than 1,800 peer-reviewed performance results for machine learning systems spanning from edge devices to data center servers. This is the second round of MLPerf Inference results to offer power measurement, with over 350 power results. The MLPerf inference benchmark is run twice yearly.

Submitters in this MLPerf inference round included Alibaba, Centaur Technology, cTuning, Dell, EdgeCortix, Fujitsu, FuriosaAI, Gigabyte, HPE, Inspur, Intel, Krai, Lenovo, LTechKorea, Nettrix, Neuchips, Nvidia, OctoML, Qualcomm, and Supermicro.

Given the range of tests and system configurations, it is difficult to draw easy comparisons between systems; the devil is in the details, and extracting them takes some digging. Other AI accelerators were also largely absent. The addition of power metrics to the inference exercise, starting last spring, is seen as a plus, although entries in that category were down.

Making sense of the mass of numbers takes some effort. As an example, the top performer on the ResNet workload was a Qualcomm Cloud AI 100 entry (a Gigabyte server using an AMD CPU and AI 100 PCIe/HHHL cards) at 310,064 queries/s. Inspur took the next two spots with an Intel (Xeon 83538) server and an AMD (Epyc 7742) server, each using A100 SXM 80 GB accelerators, at 288,050 and 280,051 queries/s. Dell followed at 272,301 queries/s, with Gigabyte and Supermicro tied (260 q/s). Qualcomm, Inspur, and Nvidia were also top performers at the edge.

System and chip vendors at the pre-briefing reported that more buyers are asking about MLPerf and including it in RFPs. To some extent, said David Kanter, MLCommons’ executive director, simply completing the exercise can be as important as the score, in that it demonstrates that all elements of a system are working as stated.

Qualcomm is vigorously promoting its strong showing and its commitment to MLPerf in a blog: “Qualcomm Technologies has significantly expanded its submission to MLPerf benchmarks. It has doubled the number of platform submissions from Edge to Cloud. The network coverage has expanded to include language processing (BERT) and added SSD ResNet-34 to the vision networks. In total, 82 benchmark results were submitted, including 36 power results.

“As AI and ML accelerate industry-wide mass deployments, it is becoming very evident that the solutions must offer a better value proposition in addition to highest performance," the post continued. "Inference-per-Second-per-Watt (I/S/W) is emerging as the most important benchmark for deployments that provide the best value-to-service for providers and end users. Qualcomm Technologies has reinforced its leadership in power efficiency with its MLPerf v1.1 submission. On servers configured with 8x Qualcomm Cloud AI 100 accelerators, Qualcomm Technologies has demonstrated highest 197 I/S/W for ResNet-50.”
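For readers new to the metric, inferences-per-second-per-watt is simply measured throughput divided by average system power. The numbers in the snippet below are made up to show the arithmetic and are not taken from any MLPerf submission.

```python
def inferences_per_second_per_watt(throughput_qps: float, avg_power_watts: float) -> float:
    """Throughput (queries/s) normalized by average measured system power (W)."""
    return throughput_qps / avg_power_watts

# Illustrative numbers only: a system doing 100,000 inferences/s while drawing
# 550 W averages roughly 182 I/S/W.
print(inferences_per_second_per_watt(100_000, 550))
```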

Overall, the performance advances documented by MLPerf continue to reflect Nvidia’s steady progress, which this round came mostly from software improvements, said Salvator. He also touted the performance of Arm-based servers paired with Nvidia GPUs.

“For the first time ever in the industry, we’re delivering data center category results on an Arm-based server," he said. "We worked with Ampere and their Altra CPUs (Q80-30 CPU in a single socket) in a Gigabyte server with an A100 and we’re able to deliver results that are running pretty much neck and neck with a similarly configured x86 server. That represents an important milestone. First, it shows that Arm, as an acceleration platform, can deliver performance just about on par with a similarly configured x86 server. It’s also a statement about the readiness of our software stack to be able to run the Arm architecture in a data center environment.”

He also noted that Nvidia’s standard Triton inference server software delivered nearly as good performance as custom code.

“The basic takeaway here is Triton gets great performance, even relative to highly customized implementations,” said Salvator. “[It] also makes it much easier for infrastructure managers to deploy networks, because it’s highly integrated into Kubernetes. You can think of Triton as living at the base of the software stack; it supports multiple networks and will allow you to do things like automatic load balancing as well as auto scaling.”
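For a sense of what deploying a network behind Triton looks like from the client side, here is a minimal sketch using Triton’s Python HTTP client; the model name and tensor names are hypothetical and must match whatever model configuration the server is actually serving.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server assumed to be listening on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Hypothetical ResNet-style model: the model name, tensor names, and shapes
# must match the server-side model configuration (config.pbtxt).
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("input", list(image.shape), "FP32")
inp.set_data_from_numpy(image)
out = httpclient.InferRequestedOutput("output")

result = client.infer(model_name="resnet50", inputs=[inp], outputs=[out])
print(result.as_numpy("output").shape)
```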

Of note was Nvidia’s use of the multi-instance GPU (MIG) capability of the Ampere GPU architecture, which both the A100 (seven instances) and the A30 (four instances) support. Nvidia demonstrated the ability to run all seven MLPerf workloads at the same time on a single A100 using MIG.
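Running several workloads concurrently on one physical GPU depends on each process seeing only its own MIG slice. A common pattern, sketched below with placeholder UUIDs and hypothetical benchmark scripts, is to pin each worker process to a slice via CUDA_VISIBLE_DEVICES using the MIG device UUIDs reported by nvidia-smi -L.

```python
import os
import subprocess

# Placeholder MIG device UUIDs; real values come from `nvidia-smi -L` after the
# A100 has been partitioned into MIG instances.
mig_devices = [
    "MIG-00000000-0000-0000-0000-000000000001",
    "MIG-00000000-0000-0000-0000-000000000002",
]

# Hypothetical per-workload benchmark scripts, one per MIG slice.
workloads = ["run_resnet50.py", "run_bert.py"]

procs = []
for device, script in zip(mig_devices, workloads):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=device)  # this worker sees only one slice
    procs.append(subprocess.Popen(["python", script], env=env))

for p in procs:
    p.wait()  # the workloads run concurrently, each isolated to its own MIG instance
```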

The MLPerf 1.1 inference suite includes seven workloads covering recommendation, NLP, imaging, and object detection. There was one change from the last exercise: the multiple-stream scenario was omitted.

The relative lack of competitors to Nvidia and the limited number of participants overall remain issues for MLPerf. Google has participated in the past but did not in this inference round. Likewise, some of the newer AI system/accelerator players, such as Cerebras and Graphcore, have yet to participate. How this will influence MLCommons’ long-term plans is unclear. MLCommons has broader aspirations than just benchmarking.

As described by Kanter, the young organization would also like to play a role in hosting/providing datasets and best practices.

“The Datasets Working Group creates and hosts public datasets that are large, actively maintained, and permissively licensed – especially for commercial use," Kanter told HPCwire. "We aim to develop a center of expertise and supporting technologies that dramatically improves the quality and reduces the cost of new public datasets. We believe that a modest investment in public datasets can have impressive ROI in terms of machine learning innovation and market growth. The Datasets Working Group’s first project is the People’s Speech dataset, an open speech recognition dataset that is approximately 100x larger than existing open alternatives. We are currently validating the utility of the data in preparation for public release.”

The best practices working group looks at opportunities to address common and cross-cutting needs of AI practitioners, he said. "The starting point for this effort is to reduce friction for machine learning by ensuring that models are easily portable and reproducible. This initial starting point is the MLCube project, where we are creating the source code and specifications to achieve this.”

Kanter described MLCube as a shipping container that enables researchers and developers to easily share the software that powers machine learning. “MLCube is a set of common conventions for creating ML software that can just ‘plug-and-play’ on many different systems,” he said.

Link to MLCommons release: https://mlcommons.org/en/news/mlperf-inference-v11/

Link to Nvidia blog on MLPerf results: https://blogs.nvidia.com/blog/2021/09/22/mlperf-ai-inference-arm/

Link to Intel blog on MLPerf results: https://www.intel.com/content/www/us/en/artificial-intelligence/posts/intel-mlperf-inference-performance.html

Link to Qualcomm blog on MLPerf results: https://www.qualcomm.com/news/onq/2021/09/22/qualcomm-cloud-ai-100-emerges-fastest-ai-inference-solution-world

 
