Advanced Computing in the Age of AI | Wednesday, June 29, 2022

AI-Ready 2nd Generation Intel® Xeon® Platinum 9200 Processors Demonstrate Leadership Performance 
Sponsored Content by Intel

Built for HPC workloads including AI, the latest processors from Intel® accelerate the most data-intensive workloads in science and industry

Traditional high performance computing (HPC) workloads like simulation and modeling, or high performance data analytics, remain highly relevant in research and industry. However, as converged workloads involving AI gain adoption, HPC systems need to keep pace. 2nd Generation Intel® Xeon® Platinum processors, with built-in AI acceleration technologies, rise to this challenge with leadership performance to handle the most demanding HPC workloads.

The Intel Xeon Platinum 9200 processors incorporate two Intel Xeon dies into a single package, supported by 12 DDR4 memory channels. Various SKUs offer customers a choice of 32 to 56 cores per processor. For real-world workloads commonly used in science and manufacturing, the 56-core Intel Xeon Platinum 9282 processors deliver an average of 31% higher performance than a 64-core AMD Rome-based system (AMD EPYC 7742).1 For more detailed benchmarking data comparing the processors, please visit our website.

Compared with the Intel Xeon Platinum 8180 processors, the newest Intel Xeon Platinum 9282 processors offer a 2X average performance improvement2. In AI inference scenarios, the Intel Xeon Platinum 9282 processor with Intel DL Boost offers up to a 30X increase in performance over the Intel Xeon Platinum 8180 processors3.
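The inference speedup comes from Intel DL Boost, which exposes the AVX-512 VNNI instructions for INT8 math. On Linux, whether a given host supports these instructions can be checked from the CPU flag list; the sketch below is illustrative (the helper names are not from this article):

```python
def supports_dl_boost(cpuinfo_flags: str) -> bool:
    """Return True if the CPU flag list includes AVX-512 VNNI,
    the instruction set behind Intel DL Boost INT8 inference."""
    return "avx512_vnni" in cpuinfo_flags.split()

def read_cpu_flags(path: str = "/proc/cpuinfo") -> str:
    """Read the 'flags' line for the first CPU from /proc/cpuinfo."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return line.partition(":")[2]
    return ""
```

Frameworks such as the Intel Optimization for Caffe cited in the footnotes perform this kind of feature detection automatically and fall back to FP32 kernels when VNNI is absent.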

Of course, processors represent only one element of overall HPC system performance. Working in tandem with most 2nd Generation Intel Xeon Scalable processors, other ingredients in the Intel portfolio, like Intel Optane™ DC persistent memory and Intel Optane DC SSDs, augment the CPUs and help accelerate mission-critical endeavors. Intel Optane DC persistent memory combines non-volatility with high capacity to support use cases involving large data sets, while Intel Optane DC SSDs deliver the low latency and high bandwidth needed for an HPC workload’s data ingestion and inference stages.
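In its app-direct mode, persistent memory is typically presented to applications as a memory-mapped file (for example, on a DAX-mounted filesystem or through the PMDK libpmem library). The following simplified sketch uses an ordinary file-backed mapping to illustrate the load/store access pattern; it is not the actual PMDK API, and the file paths and record layout are invented for the example:

```python
import mmap
import struct

def write_record(path: str, value: int) -> None:
    """Store a 64-bit counter at the start of a memory-mapped region.
    On a DAX filesystem this mapping would hit persistent memory
    directly (illustrative only; real pmem code would use libpmem)."""
    with open(path, "wb") as f:
        f.truncate(mmap.PAGESIZE)          # size the backing region
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), mmap.PAGESIZE) as m:
            m[:8] = struct.pack("<Q", value)
            m.flush()                      # pmem_persist() on real pmem

def read_record(path: str) -> int:
    """Read the 64-bit counter back from the mapped region."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), mmap.PAGESIZE,
                       access=mmap.ACCESS_READ) as m:
            return struct.unpack("<Q", m[:8])[0]
```

The appeal of this model for large HPC data sets is that the data structure is addressed in place, byte by byte, rather than serialized through a block-storage I/O stack.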

Powering modern science and industry

“Many organizations today use their HPC systems for multiple workload types, and that variability places different demands on their systems,” said Harris Joyce, Director of HPC Marketing at Intel Corporation. “For this reason, customers tell us they need flexible, adaptable, and future-ready systems. The latest Intel Xeon processors with built-in AI acceleration and complemented with Intel Optane DC Persistent Memory technology offer a comprehensive solution which helps companies and research institutions prepare for the new possibilities which AI enables by accelerating the convergence of HPC and AI.”

Broad ecosystem for Intel solutions

A vast ecosystem of OEMs, systems integrators, solution providers, and software developers backs Intel solutions. OEMs including Atos, Cray/Hewlett Packard Enterprise, Lenovo, Inspur, Sugon, H3C, and Penguin currently offer Intel Xeon Platinum 9200 processor-based solutions.

Advania, located in Iceland, is the first solutions provider to offer HPC-in-the-cloud instances based on the Intel Xeon Platinum 9200 processors. Because Advania’s cloud-based HPC instances offer performance levels approaching those of on-premises systems, customers do not need to sacrifice speed for versatility.

For customers seeking a turnkey HPC system based on Intel Xeon Platinum 9200 processors, the Intel® Server System S9200WK product family for Intel Data Center Blocks (Intel DCB) offers a stellar option. The Intel DCB, available through OEMs, offers pre-validated solutions featuring Intel’s latest data center technologies. Since the unbranded server systems are ready for rapid deployment, OEMs and their customers can accelerate time to market with proven and performant HPC solutions.

2019 Supercomputing Conference

In Intel’s booth #1301 at SC19, attendees can see head-to-head performance and feature comparisons that demonstrate the mettle of the Intel Xeon Platinum 9200 processors.

Find out more

Learn how Intel Xeon Platinum 9200 processors will benefit your organization.

  1. For configuration details, visit (Intel Xeon Scalable processors – claim #31). For additional detail visit
  2. 2x Average Performance Improvement compared with Intel® Xeon® Platinum 8180 processor. Geomean of est SPECrate2017_int_base, est SPECrate2017_fp_base, Stream Triad, Intel® Distribution of Linpack, server side Java. Platinum 92xx vs Platinum 8180: 1-node, 2x Intel® Xeon® Platinum 9282 cpu on Walker Pass with 768 GB (24x 32GB 2933) total memory, ucode 0x400000A on RHEL7.6, 3.10.0-957.el7.x86_64, IC19u1, AVX512, HT on all (off Stream, Linpack), Turbo on all (off Stream, Linpack), result: est int throughput=635, est fp throughput=526, Stream Triad=407, Linpack=6411, server side java=332913, test by Intel on 2/16/2019. vs. 1-node, 2x Intel® Xeon® Platinum 8180 cpu on Wolf Pass with 384 GB (12x 32GB 2666) total memory, ucode 0x200004D on RHEL7.6, 3.10.0-957.el7.x86_64, IC19u1, AVX512, HT on all (off Stream, Linpack), Turbo on all (off Stream, Linpack), result: est int throughput=307, est fp throughput=251, Stream Triad=204, Linpack=3238, server side java=165724, test by Intel on 1/29/2019.
  3. Up to 30X AI performance with Intel® Deep Learning Boost (Intel DL Boost) compared to Intel® Xeon® Platinum 8180 processor (July 2017). Tested by Intel as of 2/26/2019. Platform: Dragon rock 2 socket Intel® Xeon® Platinum 9282 (56 cores per socket), HT ON, turbo ON, Total Memory 768 GB (24 slots/ 32 GB/ 2933 MHz), BIOS: SE5C620.86B.0D.01.0241.112020180249, Centos* 7 Kernel 3.10.0-957.5.1.el7.x86_64, Deep Learning Framework: Intel® Optimization for Caffe* version: d554cbf1, ICC 2019.2.187, MKL DNN version: v0.17 (commit hash: 830a10059a018cd2634d94195140cf2d8790a75a), model:, BS=64, No datalayer DummyData: 3x224x224, 56 instance/2 socket, Datatype: INT8 vs Tested by Intel as of July 11th 2017: 2S Intel® Xeon® Platinum 8180 cpu @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to "performance" via intel_pstate driver, 384GB DDR4-2666 ECC RAM. CentOS* Linux release 7.3.1611 (Core), Linux kernel* 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC). Performance measured with: Environment variables: KMP_AFFINITY='granularity=fine,compact', OMP_NUM_THREADS=56, CPU Freq set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe: (, revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with "caffe time --forward_only" command, training measured with "caffe time" command. For "ConvNet" topologies, dummy dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from (ResNet-50),. Intel C++ compiler ver. 17.0.2 20170213, Intel® Math Kernel Library (Intel® MKL) small libraries version 2018.0.20170425. Caffe run with "numactl -l".


For more complete information about performance and benchmark results, visit

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available security updates. No product or component can be absolutely secure.

Refer to for more information regarding performance and optimization choices in Intel software products.

Intel Advanced Vector Extensions (Intel AVX) provides higher throughput to certain processor operations. Due to varying processor power characteristics, utilizing AVX instructions may cause a) some parts to operate at less than the rated frequency and b) some parts with Intel® Turbo Boost Technology 2.0 to not achieve any or maximum turbo frequencies. Performance varies depending on hardware, software, and system configuration and you can learn more at

Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

