Advanced Computing in the Age of AI | Tuesday, September 26, 2023

HPE Offers LLM Option for Supercomputing in the Cloud 

The ranks of high-performance computing vendors are thinning, and HPE is one of the last U.S. companies still building supercomputers – a fact indirectly acknowledged by Dan Reed, a professor at the University of Utah, during a keynote at the ISC conference last month.

Now, HPE has a new option for running HPC-focused machine learning applications without building and managing on-premises supercomputers: the company is expanding its GreenLake cloud service to include high-performance computing options for large language models.

HPE GreenLake for Large Language Models in the cloud “allows single large-scale AI and HPC jobs to run on hundreds or thousands of CPUs or GPUs at once, which is very, very different than general purpose cloud offerings that run multiple jobs in parallel on a single instance,” said Justin Hotard, executive vice president and general manager for HPC, AI, and Labs, during a press briefing.

Supercomputers require unique datacenter capabilities for power and cooling, and “until now, supercomputers have not been available on demand in a consumption model,” Hotard said.

The supercomputing option in the cloud is part of a larger announcement of HPE entering the cloud market for AI. HPE GreenLake for Large Language Models provides access to a supercomputer, software stack and services for customers to remotely run machine-learning applications. GreenLake packages computing as a utility service – similar to how electric companies charge for monthly usage. In this case, HPE charges for the AI computing consumed in the cloud and provides the ability to customize software and hardware connections.

HPE is providing access specifically to its Cray XD supercomputers, and the “initial large language models are on Nvidia H100 GPUs,” Hotard said, adding, “we will provide further details as we launch additional specific types of instances.” The company announced the XD2000 and XD6500 supercomputers for enterprise and AI applications last year. The XD machines use some of the same Cray technologies found in exascale systems, including the Slingshot interconnect and the ClusterStor E1000 storage system.

An ongoing debate in the supercomputing community revolves around the relative safety of air-gapped supercomputing systems versus the risks and performance issues of moving HPC workloads to the cloud, a migration that is happening gradually. For LLMs, the cloud can be a bottleneck, as it does not provide the throughput or bandwidth of on-premises supercomputers. HPE is studying how the on-premises and cloud models for AI could complement each other.

“We’ve been doing a small development cloud testbed for a number of months now and we’ve got a lot of positive feedback on requirements. I think they have got something that we believe will be addressed,” Hotard said, adding that HPE GreenLake for LLMs in the cloud complements the company’s on-premises supercomputers. Customers running on-premises AI workloads look for bursting capabilities, and the cloud option can be bolted onto an existing cluster. The cloud can free up on-premises resources for more critical AI workloads, and it also adds diversity to the HPC software and computing stack, Hotard said.

Services & software

HPE has a stack of hardware and software services at both the on-premises and cloud levels that complement each other. The software stack has features to make LLMs trustworthy and accurate. Machine-learning data-management software keeps data visible at all times, and integrated, tracked, and audited. These capabilities provide guardrails for generating safe and reliable output, which has been a big concern with LLMs hallucinating and producing unstable responses.

“Our supercomputers leverage the HPE Cray programming environment, which offers developers tools to create or debug and tune code and optimize their applications,” Hotard said.

Users will have access to Luminous, a large language model from Aleph Alpha with 13 billion parameters. The LLM is multimodal, meaning it can process images as well as text. Top AI companies such as Google are building multimodal LLMs that support all types of input, including voice. “This is the first service that we’re announcing on HPE GreenLake, and we anticipate releasing services in other areas such as climate modeling, drug discovery, financial services, manufacturing, and transportation,” Hotard said. Luminous fits into HPE’s plans to offer foundation models that support HPC workloads, as opposed to the generic workloads used in general-purpose computing. HPE will initially offer open-source models as well as proprietary models available for purchase.

“We’re already in deep discussions on the pharmaceutical side as well, for example. Those are going to vary based on the use cases or what the best partners are to work with,” said Evan Sparks, chief product officer for Artificial Intelligence at HPE, in response to a question from HPCwire.

HPE currently has no plans to offer OpenAI’s GPT-4 large language model to supercomputing customers, but “that could evolve as partnership discussions evolve,” Sparks said.

HPE GreenLake for LLMs will first be publicly available in North America, with availability in Europe toward the end of the year or early next year.

The competition

To be sure, public cloud providers also offer supercomputing options via virtual machine instances. Google last month announced the A3 supercomputer, which has 26,000 Nvidia H100 GPUs. Amazon offers its own suite of HPC options, including EC2 instances with its high-speed Elastic Fabric Adapter interconnect and the Lustre or ZFS file systems. Amazon is also talking up confidential computing in its HPC offerings, which keeps data secure while it is being processed in the VMs.

Customers have multiple options for AI and supercomputing in the cloud, but the choice will come down to whether organizations want HPE’s service, a hybrid system less exposed to the public internet, or the public cloud providers, which rely on their own security mechanisms to protect data.

But HPE said GreenLake for LLMs is complementary to public cloud services, where customers may store the structured and unstructured data used to train machine learning models.

“Data is a critical input to training and tuning these kinds of models. There needs to be a [mechanism] to get data from the public cloud, or wherever the data resides, into the service. Those mechanisms will be made available,” Sparks said. GreenLake customers can also make API calls, which is a popular way to connect AI applications to well-known LLMs such as GPT-4.
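For readers unfamiliar with what such an API call looks like, here is a minimal sketch of how an application might assemble a REST-style request to an LLM service. The endpoint URL, model name, and JSON field names below are hypothetical placeholders, not HPE’s or any vendor’s actual interface; the point is only the generic shape of the request.

```python
import json
import urllib.request

# Hypothetical endpoint -- for illustration only, not a real service.
API_URL = "https://llm.example.com/v1/completions"

def build_llm_request(prompt: str, api_key: str,
                      max_tokens: int = 256) -> urllib.request.Request:
    """Construct (but do not send) an HTTP request for an LLM completion."""
    payload = json.dumps({
        "model": "example-model",   # assumed model identifier
        "prompt": prompt,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # token-based auth is typical
        },
        method="POST",
    )

# Build a request; sending it would be a single urllib.request.urlopen(req) call.
req = build_llm_request("Summarize this maintenance log.", api_key="demo-key")
body = json.loads(req.data.decode("utf-8"))
```

Separating request construction from transmission, as above, keeps the payload easy to inspect and test before any network traffic occurs.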