
MIPS-Based Platform Targets AI App Developers 

High-end computing platforms are emerging to support AI applications such as inferencing and training for so-called edge applications, ranging from IoT networks to autonomous vehicles.

The latest entry comes from Wave Computing, which released a MIPS-based 64-bit AI platform this week. Available under license, the “TritonAI” programmable platform is aimed at AI chip designers targeting edge applications.

Wave Computing, which specializes in the data flow processing of neural networks, acquired MIPS Technologies last June.

The AI startup, based in Campbell, Calif., said its new platform delivers up to 32-bit processing for AI inferencing at the edge along with 32-bit floating-point support for edge training. The idea is to address a range of AI use cases on a single platform, the company said Tuesday (April 16).
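To illustrate that split in generic terms, the sketch below trains a small model in 32-bit floating point and then converts it to a reduced-precision integer form for edge inferencing. It uses stock TensorFlow rather than Wave Computing's toolchain, and the model, data and file names are placeholders.

```python
# Illustrative sketch only (generic TensorFlow, not Wave's TritonAI SDK):
# train in float32, then convert to a quantized model for edge inferencing.
import numpy as np
import tensorflow as tf

# Small float32 model, trained as usual.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
x = np.random.rand(64, 32, 32, 3).astype("float32")   # placeholder data
y = np.random.randint(0, 10, size=(64,))
model.fit(x, y, epochs=1, verbose=0)

# Post-training quantization: the converter emits integer-weighted tensors
# suited to low-precision inferencing on edge hardware.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("edge_model.tflite", "wb") as f:
    f.write(converter.convert())
```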

The IP platform combines the vendor’s proprietary “AI-native dataflow” approach with programming based on the TensorFlow machine learning library. The platform runs on a MIPS 64-bit single-instruction, multiple-data (SIMD) engine architecture, and it also supports the MIPS developer environment based on the chip design’s open instruction set architecture.

Along with handling edge inferencing and training for current AI algorithms, Wave Computing promotes the TritonAI platform as capable of supporting emerging algorithms.

The AI platform configures SoCs with up to six MIPS 64 CPUs, each with up to four hardware threads, Wave said. The MIPS architecture executes the TensorFlow framework on a Linux OS. The resulting platform can be used to develop both AI inferencing and edge learning applications.
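As a rough sketch of how that thread budget might be exposed to TensorFlow on a Linux system, the snippet below maps the framework's standard CPU thread pools onto a six-CPU, four-threads-per-CPU configuration. The settings are illustrative assumptions, not Wave Computing defaults.

```python
# Illustrative only: generic TensorFlow threading knobs for an SoC with
# six CPUs, each exposing four hardware threads (24 threads in total).
import tensorflow as tf

# Threads used inside a single op (e.g., one CPU's four hardware threads).
tf.config.threading.set_intra_op_parallelism_threads(4)

# Ops that may be scheduled concurrently (e.g., one per CPU).
tf.config.threading.set_inter_op_parallelism_threads(6)

print(tf.config.threading.get_intra_op_parallelism_threads(),
      tf.config.threading.get_inter_op_parallelism_threads())
```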

Other AI frameworks such as Caffe2 can be ported to the MIPS architecture, and additional AI networks are said to be supported on the TritonAI platform.

The vendor pitches its dataflow technology and IP as accelerating deep learning by approaching AI inferencing and training as “a dataflow application that demands a different type of processor.” Wave’s “data processor units” are touted as eliminating the need for a host and co-processor, thereby scaling for IoT edge and datacenter applications.

“The tremendous growth of edge-based AI use cases is exacerbating the challenges of SoC designers who continue to struggle with legacy IP products that were not designed for efficient AI processing,” said Wave Computing CEO Derek Meyer.

The 64-bit AI platform also includes a TensorFlow distribution that processes current convolutional neural network (CNN) algorithms, scaling up to 8 TOPS per watt. It further incorporates a scalable fabric that can be adapted to emerging AI as well as vision and signal processing algorithms.
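For context, running such a quantized CNN with the stock TensorFlow Lite interpreter looks roughly like the following; the model file and input shape are placeholders rather than anything shipped with TritonAI.

```python
# Illustrative only: invoking a quantized CNN with the standard TensorFlow
# Lite interpreter, the kind of integer inferencing workload described above.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="edge_model.tflite")  # placeholder file
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.random.rand(1, 32, 32, 3).astype(inp["dtype"])  # stand-in for a sensor frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```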

The proprietary “WaveFlow” architecture also supports algorithm execution without relying on the underlying MIPS system.

The AI acceleration specialist completed an $86 million funding round in November 2018, bringing total investments to more than $200 million.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).

EnterpriseAI