Advanced Computing in the Age of AI | Tuesday, September 29, 2020

Nvidia Unveils AI Supercomputer, Launches A100 PCIe Cards 

Nvidia unveiled its Selene AI supercomputer today in tandem with the updated listing of the world’s fastest computers. Nvidia also introduced the PCIe form factor of the Ampere-based A100 GPU.

Nvidia’s new internal AI supercomputer, Selene, joins the upper echelon of the 55th Top500’s ranks and breaks an energy-efficiency barrier. With 27.5 double-precision Linpack petaflops, Selene landed in the seventh spot on the latest Top500 list released today as part of the ISC 2020 Digital proceedings. Selene is the second most-performant industry system on the list, coming in one spot below Eni’s HPC5 machine, which was sixth with 35.5 HPL petaflops (and also uses Nvidia GPUs).

This Top500 list marks the entrance of two industry systems into the top ten, with Selene being the first internal IT vendor system to do so. Nvidia uses supercomputers internally to support chip design and model development, as well as for its work in robotics, self-driving cars, healthcare and other research projects.

Located in Santa Clara, Calif., Selene is a DGX SuperPOD, powered by Nvidia’s A100 GPUs and AMD’s Epyc Rome CPUs within the DGX A100 form factor, clustered over Mellanox HDR InfiniBand. Altogether, Selene comprises 280 DGX A100s, housing a total of 2,240 A100 GPUs and 494 Mellanox Quantum 200G InfiniBand switches, providing 56 TB/s of network fabric. The system includes 7 petabytes of all-flash network storage.
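The GPU count above follows directly from the node count; a quick arithmetic sanity check, assuming the standard eight-GPU DGX A100 configuration (a published spec, though not stated in this article):

```python
# Sanity-check Selene's published node and GPU totals.
# Assumption: each DGX A100 node houses 8 A100 GPUs (standard DGX A100 spec).
NODES = 280
GPUS_PER_NODE = 8

total_gpus = NODES * GPUS_PER_NODE
print(total_gpus)  # 2240, matching the figure cited above
```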

Selene was built with vertical integration of the network and the GPUs, using SHARP, said Gilad Shainer, senior vice president of marketing, who came to Nvidia via the Mellanox acquisition. “SHARP is the engine on the network that does the data reduction, which is a critical part in both traditional HPC simulations and deep learning,” he said in a pre-briefing held for media.

On the heels of Nvidia’s Ampere launch, Selene was constructed and up and running in less than a month, the company said.

Nvidia also runs internal workloads on three other machines that have made it into the Top500 ranking. There’s the V100-based DGX Superpod machine, which came in 24th on the latest Top500 with 9.4 Linpack petaflops; the P100-based DGX Saturn-V, deployed in 2016 that’s currently in 78th place with 3.3 petaflops; and Circe, another V100-based Superpod that’s grabbed the 91st rung with 3.1 Linpack petaflops.

Reached for comment, Karl Freund, senior analyst for HPC and deep learning with Moor Insights and Strategy, underscored just how integral this in-house supercomputing power is to Nvidia’s competitive position. “First with Saturn V and now with Selene, Nvidia’s using their own technology to create better products, hardware and software, and that’s going to create a tough bar for somebody to clear competitively,” he told HPCwire. “You can’t imagine a startup spending tens of millions of dollars to develop a supercomputer that their engineers can use to develop their next chip. The use of AI, especially deep learning and reinforcement learning networks to do back-end physical design, is shown to create massive innovation.”

Nvidia’s newest AI supercomputer, Selene, took second place on the Green500 list, delivering 20.52 gigaflops-per-watt and becoming one of only two machines to break the 20 gigaflops-per-watt barrier. The top-ranked green machine is MN-3, made by Top500 newcomer Preferred Networks. MN-3 turned in a record 21.1 gigaflops-per-watt run, a 1.62 petaflops Linpack score, and a 394th-place finish on the Top500.
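The Green500 metric is simply Linpack performance divided by power draw, so the figures above also imply each machine’s approximate power envelope. A minimal sketch, rearranging that ratio (an approximation; exact measured power may differ):

```python
def implied_power_megawatts(linpack_pflops, gflops_per_watt):
    """Power (MW) implied by a Linpack score and a Green500 efficiency rating."""
    gflops = linpack_pflops * 1e6        # petaflops -> gigaflops
    watts = gflops / gflops_per_watt     # efficiency = flops / power
    return watts / 1e6                   # watts -> megawatts

print(round(implied_power_megawatts(27.5, 20.52), 2))  # Selene: ~1.34 MW
print(round(implied_power_megawatts(1.62, 21.1), 3))   # MN-3: ~0.077 MW
```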

Nvidia GPUs power six of the ten most energy-efficient machines on the Green500 list and fifteen of the top 20.

Nvidia is also expanding its Ampere portfolio with a new PCIe A100 GPU card. When Nvidia launched its Ampere architecture, the only way to obtain the A100 GPUs was to purchase Nvidia’s DGX A100 systems (available in four- and eight-GPU configurations) or the HGX A100 building blocks, leveraged by partnering cloud service providers and server makers. Now the datacenter company is announcing that PCIe-based systems will be forthcoming from server partners, in configurations ranging from a single GPU to ten or more.

The SXM variant with NVLink is still only available as part of the HGX platform, which, owing to its NVLink connectivity, provides 10 times the bandwidth of PCIe Gen4, according to Nvidia.

Nvidia sold its prior generation V100 GPUs in both the SXM form factor and the PCIe form factor. SXMs were not restricted to an HGX board sale, which enabled system makers to essentially build their own DGX clones that potentially undercut Nvidia’s sales. Now Nvidia is tightening up its sales strategy, so that OEM partners that want to provide servers based on the more performant NVLink-equipped SXM parts must build their A100-based solutions using Nvidia’s four- or eight-way HGX boards.

“It’s kind of a bifurcated model by channel; direct channel customers can and will buy the DGX, and everybody else buys through OEMs,” said Freund. “It’s a pretty clean model. The OEMs are on notice that they gotta move fast or Nvidia will take up all of this as a system vendor, right? But Nvidia doesn’t really want to have a sales channel broad enough to do that exclusively. So they still need the OEMs.”

The PCIe form factor matches SXM on peak performance: 9.7 teraflops of FP64 performance (up to 19.5 teraflops with FP64 tensor cores), and 19.5 teraflops of FP32 performance (up to 312 teraflops of tensor float 32 with structural sparsity enabled). However, the PCIe A100 is designed to run at a lower TDP: 250 watts, compared with the SXM’s 400 watts. This means that while peak performance is the same, sustained performance is impacted. On real applications, the A100 PCIe GPU provides about 90 percent of the delivered performance of the A100 SXM when running on a single GPU, Nvidia said. But when scaling up, where applications run on four, eight or more GPUs, the SXM configuration inside the HGX provides up to 50 percent higher performance on account of the NVLink connections, according to Nvidia.
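Those two claims can be condensed into a rough rule of thumb. A minimal sketch of that framing (my simplification of Nvidia’s stated numbers, not an Nvidia formula):

```python
def pcie_fraction_of_sxm(n_gpus):
    """Approximate fraction of SXM delivered performance the PCIe A100
    achieves, per Nvidia's stated single-GPU and at-scale comparisons."""
    if n_gpus == 1:
        return 0.90        # ~90% of SXM on a single GPU
    return 1.0 / 1.50      # SXM up to 50% higher when scaling across GPUs

print(pcie_fraction_of_sxm(1))  # 0.9
print(pcie_fraction_of_sxm(8))  # ~0.67
```

In other words, the PCIe card gives up little on one GPU but a substantial margin at multi-GPU scale, which matches Nvidia’s positioning of PCIe for smaller-footprint servers.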

Nvidia says the PCIe configuration is well suited for mainstream accelerated servers that go into standard racks offering lower power per server. “While the PCIe are intended for AI inference and some HPC applications that scale across one or two GPUs, the A100 SXM configuration is ideal for customers with applications scaling to multiple GPUs in a server, as well as across servers,” said Paresh Kharya, director of product management, accelerated computing at Nvidia.

Nvidia benchmarking results*

As Nvidia ramps its go-to-market for the A100, the company is anticipating an expanded ecosystem of A100-powered servers. It expects 30 systems this summer, with over 20 more coming by the end of the year. Systems are expected to be forthcoming from a wide range of system manufacturers, including ASUS, Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Inspur, Lenovo, One Stop Systems, Quanta/QCT and Supermicro. Nvidia also reported that it is building out its portfolio of NGC-Ready certified systems.

* 1 BERT pre-training throughput using PyTorch, including (2/3) Phase 1 and (1/3) Phase 2 | Phase 1 Seq Len = 128, Phase 2 Seq Len = 512 | V100: NVIDIA DGX-1™ server with 8x NVIDIA V100 Tensor Core GPU using FP32 precision | A100: NVIDIA DGX A100 server with 8x A100 using TF32 precision.
2 BERT large inference | NVIDIA T4 Tensor Core GPU: NVIDIA TensorRT™ (TRT) 7.1, precision = INT8, batch size 256 | V100: TRT 7.1, precision FP16, batch size 256 | A100 with 7 MIG instances of 1g.5gb; pre-production TRT, batch size 94, precision INT8 with sparsity.
3 V100 used is single V100 SXM2. A100 used is single A100 SXM4. AMBER based on PME-Cellulose, LAMMPS with Atomic Fluid LJ-2.5, FUN3D with dpw, Chroma with szscl21_24_128.

About the author: Tiffany Trader

With over a decade’s experience covering the HPC space, Tiffany Trader is one of the preeminent voices reporting on advanced scale computing today.
