Graphcore IPU Gets a Public Cloud Boost
Graphcore, the U.K. AI chip developer, is expanding its roster of cloud partners to include Cirrascale Cloud Services, a deep learning infrastructure specialist.
The result of the collaboration is a scalable AI cloud platform dubbed Graphcloud that provides access to Graphcore’s second-generation intelligent processing unit, or IPU, and accompanying software stack.
San Diego-based Cirrascale provides deep learning infrastructure for applications ranging from natural language processing (NLP) and computer vision workflows to autonomous vehicles. The partners said the machine learning platform based on Graphcore's Mk2 IPU-POD cluster is aimed at customers seeking to scale deep learning applications from pilot projects to production workloads.
Graphcore's IPU runs on a software stack dubbed Poplar, which integrates with the TensorFlow open source library and the Open Neural Network Exchange (ONNX). That approach is intended to let developers keep using existing machine learning tools and models.
Graphcore released its second-generation IPU-POD this past July. The M2000 compute blade delivers petaflop-scale performance via four Colossus Mk2 GC200 IPU processors, each containing 1,472 separate IPU cores. The platform also includes an IPU gateway SoC built around a quad-core Arm Cortex-A processor. A low-latency "IPU-Fabric" interconnect tops out at 2.8 Tbps.
The partners said Graphcloud also can be integrated with other cloud-based workflows. For example, Microsoft Azure added instances of Graphcore’s first-generation IPU in 2019 following several years of development aimed at enhancing NLP and machine vision models. This week’s announcement marks the first public cloud instance of the Mk2 platform.
Weekly pricing for a 16-IPU POD cluster is $5,000. The 64-IPU version is priced at $20,000 a week, Graphcore said.
Industry analysts praised the cloud AI collaboration.
“Cirrascale has always been an innovator with new technologies, while Graphcore is one of the brightest stars in the AI startup world, so the combination makes a lot of sense,” said Karl Freund, senior analyst for HPC and machine learning at Moor Insights & Strategy.
“The new second-generation Graphcore IPU has excellent scalability and a large on-die memory store, both critical for handling the new class of massive models such as Transformers for natural language processing,” Freund added.
Transformer models process sequential data for applications such as speech translation and text summarization.
Cirrascale has carved out a niche as a cloud provider for HPC applications. The collaboration with Graphcore adds to the company's roster of AI cloud services. In 2019, for example, it added Nvidia's DGX-1 deep learning platform to its GPU cloud services offerings. DGX-1 integrates eight Nvidia V100 Tensor Core GPU data center accelerators.
--Editor's note: This story has been updated.