
IBM Begins Power9 Rollout with Backing from DOE, Google 

After over a year of buildup, IBM is unveiling its first Power9 system based on the same architecture as the Department of Energy CORAL supercomputers, Summit and Sierra. The new AC922 server pairs two Power9 CPUs with four or six Nvidia Tesla V100 NVLink GPUs. IBM is positioning the Power9 architecture as “a game-changing powerhouse for AI and cognitive workloads.”

The AC922 extends many of the design elements introduced in Power8 “Minsky” boxes with a focus on enabling connectivity to a range of accelerators (Nvidia GPUs, ASICs, FPGAs, and PCIe-connected devices) using an array of interfaces. In addition to being the first servers to incorporate PCIe Gen4, the new systems support the NVLink 2.0 and OpenCAPI protocols, which offer nearly 10x the maximum bandwidth of PCIe Gen3-based x86 systems, according to IBM.

IBM AC922 rendering

“We designed Power9 with the notion that it will work as a peer computer or a peer processor to other processors,” said Sumit Gupta, vice president of AI and HPC within IBM’s Cognitive Systems business unit, ahead of the launch. “Whether it’s GPU accelerators or FPGAs or other accelerators that are in the market, our aim was to provide the links and the hooks to give all these accelerators equal footing in the server.”

In the coming months and years there will be additional Power9-based servers to follow from IBM and its ecosystem partners, but this launch is all about the flagship AC922 platform and specifically its benefits to AI and cognitive computing – something Ken King, general manager of OpenPOWER for IBM Systems Group, shared with HPCwire when we sat down with him at SC17 in Denver.

“We didn’t build this system just for doing traditional HPC workloads,” King said. “When you look at what Power9 has with NVLink 2.0, we’re going from 80 gigabytes per second throughput [in NVLink 1.0] to over 150 gigabytes per second throughput. PCIe Gen3 only has 16. That GPU-to-CPU I/O is critical for a lot of the deep learning and machine learning workloads.”
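King’s figures can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below is a hypothetical illustration, not IBM code: it takes the aggregate peak bandwidths quoted above (in gigabytes per second) and an assumed 16 GB working set, and estimates idealized host-to-GPU transfer times. The resulting NVLink 2.0 ratio lands near the “nearly 10x” advantage over PCIe Gen3 that IBM cites.

```python
# Back-of-the-envelope CPU<->GPU transfer-time comparison using the
# aggregate peak bandwidth figures quoted in the article (GB/s).
BANDWIDTH_GBPS = {
    "PCIe Gen3 x16": 16.0,
    "NVLink 1.0": 80.0,
    "NVLink 2.0": 150.0,
}

def transfer_seconds(size_gb: float, link: str) -> float:
    """Idealized time to move size_gb gigabytes over the given link."""
    return size_gb / BANDWIDTH_GBPS[link]

if __name__ == "__main__":
    model_gb = 16.0  # hypothetical working-set size, not from the article
    for link in BANDWIDTH_GBPS:
        print(f"{link}: {transfer_seconds(model_gb, link):.3f} s")
    speedup = BANDWIDTH_GBPS["NVLink 2.0"] / BANDWIDTH_GBPS["PCIe Gen3 x16"]
    print(f"NVLink 2.0 vs PCIe Gen3: {speedup:.2f}x")  # 9.38x, i.e. "nearly 10x"
```

Sustained bandwidth in practice falls short of these peak figures, so the times are best read as lower bounds and the ratios as the interesting quantity.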

Coherency, which Power9 introduces via both CAPI and NVLink 2.0, is another key enabler. As AI models grow, they can easily outgrow GPU memory capacity; the AC922 addresses this by allowing accelerated applications to use system memory as if it were GPU memory, which reduces latency and simplifies programming by eliminating explicit data movement and locality management.

The AC922 server can be configured with either four or six Nvidia Volta V100 GPUs. According to IBM, a four GPU air-cooled version will be available December 22 and both four- and six-GPU water-cooled options are expected to follow in the second quarter of 2018.

While the new Power9 boxes have gone by a couple different codenames (“Witherspoon” and “Newell”), we’ve also heard folks at IBM refer to them informally as their “Summit servers” and indeed there is great visibility in being the manufacturer for what is widely expected to be the United States’ next fastest supercomputer. Thousands of the AC922 nodes are being connected together along with storage and networking to drive approximately 200 petaflops at Oak Ridge and 120 petaflops at Lawrence Livermore.

As King pointed out in reference to the delayed and retooled Argonne “Aurora” system, only one of the original CORAL contractors is fulfilling its mission to deliver “pre-exascale” supercomputing capability to the collaboration of US labs.

IBM has also been tapped by Google, which with partner Rackspace is building a server with Power9 processors called Zaius. In a prepared statement, Bart Sano, vice president of Google Platforms, praised “IBM’s progress in the development of the latest POWER technology” and said “the POWER9 OpenCAPI Bus and large memory capabilities allow for further opportunities for innovation in Google data centers.”

IBM sees the hyperscale market as “a good volume opportunity” but is obviously aware of the impact that volume pricing has had on the traditional server market. “We do see strong pull from them, but we have many other elements in play,” said Gupta. “We have solutions that go after the very fast-growing AI space, we have solutions that go after the open source databases, the NoSQL databases. We have announced a partnership with Nutanix to go after the hyperconverged space. So if you look at it, we have lots of different elements that drive the volume and opportunity around our Linux on Power servers, including of course SAP HANA.”

IBM will also be selling Power9 chips through its OpenPOWER ecosystem, which now encompasses 300 members. IBM says it’s committed to deploying three versions of the Power9 chip: one this year, one in 2018 and another in 2019. The scale-out variant is the one it is delivering with CORAL and with the AC922 server. “Then there will be a scale-up processor, which is the traditional chip targeted towards the AIX and the high-end space, and then there’s another one that will be more of an accelerated offering with enhanced memory and other features built into it; we’re working with other memory providers to do that,” said King.

He added that there might be another version developed outside of IBM, leveraging OpenPOWER, which gives other organizations the opportunity to utilize IBM’s intellectual property to build their own differentiated chips and servers.

King is confident that the demand for IBM’s latest platform is there. “I think we are going to see strong out-of-the-chute opportunities for Power9 in 2018. We’re hoping to see some growth this quarter with the solution that we’re bringing out with CORAL, but that will be more around the ESP customers. Next year is when we’re expecting that pent-up demand to start showing positive return overall for our business results.”

A lot is riding on the success of Power9 after Power8 failed to generate the kind of profits that IBM had hoped for. There was growth in Power8’s first year, said King, but after that sales tailed off. He added that capabilities like the Nutanix partnership, along with PowerAI and other software-based solutions built on top of the platform, have led to a bit of a rebound. “It’s still negative but it’s low negative,” he said, “but it’s sequentially grown quarter to quarter in the last three quarters, since Bob Picciano [SVP of IBM Cognitive Systems] came on.”

Several IBM reps we spoke with acknowledged that pricing – or at least pricing perception – was a problem for Power8.

“For our traditional market I think pricing was competitive; for some of the new markets that we’re trying to get into, like the hyperscaler datacenters, I think we’ve got some work to do,” said King. “It’s really a TCO and a price-performance competitiveness versus price only. And we think we’re going to have a much better price-performance competitiveness with Power9 in the hyperscalers and some of the low-end Linux spaces that are really the new markets.”

“We know what we need to do for Power9 and we’re very confident, with a lot of the workload capabilities that we’ve built on top of this architecture, that we’re going to see a lot more growth, positive growth, on Power9, with PowerAI, with Nutanix, and with some of the other workloads we’ve put in there. And it’s not going to be a hardware-only reason,” King continued. “It’s going to be a lot of the software capabilities that we’ve built on top of the platform, and supporting more of the newer workloads that are out there. If you look at the IDC studies of the growth curve of cognitive infrastructure, it goes from about $1.6 billion to $4.5 billion over the next two or three years – it’s a huge hockey stick – and we have built and designed Power9 for that market, specifically and primarily for that market.”

About the author: Tiffany Trader

With over a decade’s experience covering the HPC space, Tiffany Trader is one of the preeminent voices reporting on advanced scale computing today.
