Advanced Computing in the Age of AI | Monday, June 24, 2024

Cerebras and Neural Magic Collaborate to Accelerate Sparse Training and Deployment of LLMs 

SUNNYVALE, Calif. and CAMBRIDGE, Mass., May 17, 2024 -- Cerebras Systems, a pioneer in accelerating generative AI, and Neural Magic, a leader in high-performance enterprise inference servers, announced the results of their collaboration for sparse training and deployment of large language models (LLMs).

The collaboration achieved an unprecedented 70% parameter reduction with full accuracy recovery. Training on Cerebras CS-3 systems and deploying on Neural Magic's inference server solutions yields significantly faster, more efficient, and lower-cost LLMs, making them accessible to a broader range of organizations and industries.

“For the first time ever, we achieved up to 70% sparsity for a foundational model, such as Llama, with full accuracy recovery for challenging downstream tasks,” said Sean Lie, CTO and co-founder of Cerebras. “This breakthrough enables scalable training and accelerated inference – our CS-3 system provides near theoretical acceleration for training sparse LLMs, and Neural Magic’s inference server, DeepSparse, delivers up to 8.6x faster inference than dense, baseline models.”

With native hardware support for unstructured sparsity, the Cerebras CS-3 system accelerates training for models at 70% sparsity and higher – levels beyond what current GPUs such as the H100 and B100 can exploit. GPU sparsity support is limited and rigid: a fixed 50% ratio in a structured 2:4 pattern. With the CS-3 system, purpose-built for sparse models and offering the industry's highest memory bandwidth, AI practitioners can apply novel techniques from Neural Magic, such as sparse pretraining and sparse fine-tuning, to their own datasets and create highly sparse LLMs without sacrificing accuracy. The results are faster, smaller models that retain the full accuracy of their slower, dense counterparts.
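To make the distinction concrete, unstructured sparsity simply zeroes out the lowest-magnitude weights anywhere in a tensor, with no positional constraint – unlike the GPU's fixed 2:4 pattern (two zeros in every group of four). The sketch below is an illustrative magnitude-pruning routine in NumPy, not Cerebras's or Neural Magic's actual training code; the function name and shapes are hypothetical.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.70) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured sparsity).

    Illustrative only: real sparse pretraining/fine-tuning recovers accuracy by
    continuing to train with the sparsity pattern enforced, not by one-shot pruning.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)          # number of weights to zero
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold     # keep only weights above the cutoff
    return weights * mask

# Example: prune a random 256x256 layer to ~70% sparsity.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256))
pruned = magnitude_prune(w, 0.70)
achieved = 1.0 - np.count_nonzero(pruned) / pruned.size
```

Because the zeros can land anywhere, hardware must support fine-grained skipping to turn this into speedup – which is the capability the CS-3 claims for training and DeepSparse claims for inference.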

“Together with Cerebras and their purpose-built AI hardware, we created sparse, foundational models that deliver lightning-fast inference through our sparsity-aware software platform,” said Mark Kurtz, CTO of Neural Magic. “This paradigm shift provides enterprises and researchers alike with much more efficient, cost-effective, and accessible deployment of LLMs across a wide range of industries and real-world applications.”

To facilitate the adoption and further development of sparse LLMs, Cerebras and Neural Magic have released the models, recipes, implementations, and documentation of this sparsity breakthrough. For more information, please visit here.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered to form the largest AI supercomputers in the world, and they make placing models on those supercomputers dead simple by avoiding the complexity of distributed computing. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premises.

About Neural Magic

Neural Magic accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As a software-delivered solution, Neural Magic optimizes open-source models, such as large language models, to run efficiently on commodity hardware. Organizations can spend less to advance AI initiatives to production without sacrificing the performance and accuracy of their models. Founded by an MIT professor and an AI research scientist who were challenged by the constraints of existing hardware, Neural Magic enables a future where developers and IT can tap into the power of state-of-the-art, open-source AI with none of the friction.

Source: Cerebras Systems