
Cost-effective Fork of GPT-3 Released to Scientists 

Researchers looking to create a foundation for a ChatGPT-style application now have an affordable way to do so.

Cerebras is releasing open-source language models that give researchers the ingredients they need to cook up their own ChatGPT-style applications.

The open-source release includes seven models of increasing size into which researchers can feed their own data, train a system, and then generate results.
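
As a rough illustration of that feed-data, train, generate workflow – and assuming the checkpoints are used through the Hugging Face transformers library rather than Cerebras' own tooling – a minimal fine-tuning loop might look like the sketch below. The checkpoint id and the training file name are hypothetical placeholders, not part of the Cerebras release.

```python
# Hedged sketch of fine-tuning a small Cerebras-GPT checkpoint on your own text.
# The Hub id and "domain_corpus.txt" are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "cerebras/Cerebras-GPT-111M"    # assumed Hub id for the smallest checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# "domain_corpus.txt" stands in for your own domain-specific text.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="cerebras-gpt-finetuned",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```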

The models are based on the GPT-3 architecture that underpins OpenAI’s ChatGPT chatbot, and range up to 13 billion parameters.

“You need a model, and you need data. And you need expertise. And you need computer hardware,” said Andrew Feldman, CEO of Cerebras Systems.

The release is an effort to give companies an affordable way to build large language models. For scientific researchers working with smaller, domain-specific data sets, a model with up to 13 billion parameters should be enough.

The largest Cerebras-GPT model is not as big as GPT-3, which has 175 billion parameters. The latest OpenAI model, GPT-4, which is closed source and powers the AI features in Microsoft’s Bing, has significantly more parameters.

Cerebras’ models are more cost-effective: for many research uses, GPT-3 would be overkill given its sheer size and the hardware it requires.

OpenAI’s latest GPT-4 – which is for significantly larger data sets – would be expensive at hundreds of thousands of dollars per month for a license, said Karl Freund, principal analyst at Cambrian AI Research.

“There are other models that are available at lower costs that are small, but they are all going to cost you money. This does not cost you anything,” Freund said, adding “whether it’s astronomy or biochemistry or whatever, you want to build a model that is affordable.”

Cerebras is a hardware company primarily known for its AI chips. But the emergence of ChatGPT has given many AI chipmakers a new lease on life, and software can be an effective showcase for the hardware’s capabilities, Freund said.

“We’ve trained these models and are making every aspect of that training available to the open-source community and you can run it on CPUs, GPUs and TPUs,” Feldman said, adding that the models are available on ModelZoo, HuggingFace, or through the company’s GitHub repository.
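
Assuming the Hugging Face route, pulling down one of the released checkpoints and generating text follows the standard transformers workflow shown in the sketch below; the repository id is an assumption about how the checkpoints are named on the Hub, so substitute the model card you actually want.

```python
# Minimal sketch: load a Cerebras-GPT checkpoint from Hugging Face and generate text.
# The repository id is an assumed naming convention, not confirmed by Cerebras.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cerebras/Cerebras-GPT-1.3B"  # assumed Hub id; other sizes were released too
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Large language models for scientific research"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```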

The generative AI landscape is still in its infancy, but trends are emerging. OpenAI published the details of GPT-3’s architecture openly, but GPT-4 – which is being used by Microsoft for Bing’s AI capabilities – is closed source.

Beyond Microsoft, Google has LaMDA and PaLM, while Meta has LLaMA, which is open only for research. By comparison, Cerebras’ models can be used for commercial applications.

“What we have is a situation where increasingly a handful of companies hold the keys and they’re big and it’s not getting more open over time. It is getting more closed,” Feldman said.

Meta’s LLaMA foundation model has up to 65 billion parameters, while Google’s PaLM has 540 billion, but those models are far more expensive to run given their hardware requirements. The cost per transaction and response time have become a bigger part of the conversation for Microsoft and Google, which are now competing on AI search.

Customers are increasingly concerned about LLM inference costs. Historically, more capable models required more parameters, which meant larger and more expensive inference computing was necessary. Cerebras’ goal is to make sure implementations are competitive on cost and performance.

“In this release we train models with 20 tokens per parameter, which is compute-optimal, but we are already working with customers to train models with far more tokens per parameter – meaning higher quality model at lower inference costs,” said Richard Kuzma, senior product manager of natural language processing at Cerebras.
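
To make that trade-off concrete, the back-of-the-envelope sketch below works through what 20 tokens per parameter implies for training data volume, and why a smaller model is cheaper to serve. The roughly 6·N·D training-FLOPs and 2·N inference-FLOPs-per-token estimates are common approximations for dense decoder models, not Cerebras figures.

```python
# Back-of-the-envelope sketch of the compute-optimal "20 tokens per parameter" ratio
# quoted above. The ~6*N*D training-FLOPs and ~2*N inference-FLOPs-per-token figures
# are standard rules of thumb for dense decoder models, not Cerebras numbers.
TOKENS_PER_PARAM = 20  # the compute-optimal ratio quoted by Cerebras

def training_tokens(params: float) -> float:
    return params * TOKENS_PER_PARAM

def training_flops(params: float) -> float:
    return 6 * params * training_tokens(params)

def inference_flops_per_token(params: float) -> float:
    return 2 * params

for params in (1.3e9, 6.7e9, 13e9):
    print(f"{params / 1e9:>5.1f}B params: "
          f"~{training_tokens(params) / 1e9:.0f}B training tokens, "
          f"~{training_flops(params):.1e} training FLOPs, "
          f"~{inference_flops_per_token(params):.1e} FLOPs per generated token")
```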

Customers who train with Cerebras own their model weights and are not locked into cloud providers’ pricing. They can take advantage of newly developed inference optimizations rather than relying on an API that isn’t under their control.

Cerebras’ Andromeda AI supercomputer. Credit: Cerebras.

The models were trained on Cerebras’ Andromeda AI supercomputer, which links together 16 CS-2 systems with a total of 13.5 million AI compute cores.

The Andromeda system delivers more than 1 exaflop of AI performance. An x86-based system with 284 Epyc 7713 “Milan” CPUs handles preprocessing before workloads go to Cerebras’ wafer-scale WSE-2 chips, each of which packs 2.6 trillion transistors.

The WSE-2 chip was announced in April 2021. Cerebras will announce hardware updates in the coming quarter, Feldman said.

Generative AI was a big topic of discussion at last week’s GPU Technology Conference held by Nvidia. ChatGPT has shown an ability to write code, but it can also make things up – which may be entertaining, but is not good for science, said Kathleen Fisher, director of the Information Innovation Office at the U.S. Department of Defense’s Defense Advanced Research Projects Agency (DARPA), during a breakout session at GTC.

“There’s a growing sense that it is more likely to hallucinate in parts of the data that are less well covered,” Fisher said.

But that is where things like checkpoints – which are included in Cerebras’ models – come in.

“We’re going to see massive movements in addressing that particular issue, based on the science underneath the large language models, large pre-trained models, how they’re based on just statistical predictions of what’s coming next,” Fisher said.

It is healthy practice to distrust AI systems, she said.

“It seems like that capability fundamentally will continue to have a hallucinatory problem, but you could build belts and suspenders or other things on top that might catch the problems and drive the occurrence rate down,” Fisher said.

EnterpriseAI