
Wave Computing Unveils Licensable 64-Bit AI IP Platform to Enable High-Speed Inferencing and Training in Edge Applications 

CAMPBELL, Calif., April 12, 2019 -- Wave Computing, the Silicon Valley company accelerating artificial intelligence (AI) from the datacenter to the edge, today announced its new TritonAI 64 platform, which integrates a triad of powerful technologies into a single, future-proof intellectual property (IP) licensable solution. Wave’s TritonAI 64 platform delivers 8-to-32-bit integer-based support for high-performance AI inferencing at the edge now, with bfloat16 and 32-bit floating point-based support for edge training in the future.

Wave’s TritonAI 64 platform is an industry-first solution that enables customers to address a broad range of AI use cases with a single platform. The platform delivers efficient edge inferencing and training performance to support today’s AI algorithms, while giving customers the flexibility to future-proof their investment for emerging AI algorithms. Features of the TritonAI 64 platform include a leading-edge MIPS® 64-bit SIMD engine integrated with Wave’s unique approach to dataflow and tensor-based configurable technology. Additional features include access to Wave’s MIPS integrated development environment (IDE), as well as a Linux-based TensorFlow programming environment.

The global market for AI products is projected to grow dramatically to over $170B by 2025, according to technology analyst firm Tractica. The total addressable market (TAM) for AI at the edge comprises over $100B of this figure and is driven primarily by the need for more efficient inferencing, new AI workloads and use cases, and training at the edge.

“Wave Computing is achieving another industry first by delivering a licensable IP platform that enables both AI inferencing and training at the edge,” said Derek Meyer, Chief Executive Officer of Wave Computing. “The tremendous growth of edge-based AI use cases is exacerbating the challenges of SoC designers who continue to struggle with legacy IP products that were not designed for efficient AI processing. Our TritonAI solution provides them with the investment protection of a programmable platform that can scale to support the AI applications of both today and tomorrow. TritonAI 64 enhances our overall AI offerings that span datacenter to edge and is another company milestone enabled by our acquisition of MIPS last year.”

Details of Wave’s TritonAI 64 Platform:

  • MIPS 64-bit + SIMD Technology: The open instruction set architecture (MIPS Open), coupled with a mature integrated development environment (IDE), provides an ideal software platform for developing AI applications, stacks and use cases. The MIPS IP subsystem in the TritonAI 64 platform enables SoCs to be configured with up to six MIPS 64 CPUs, each with up to four hardware threads. The MIPS subsystem hosts the execution of Google’s TensorFlow framework on a Debian-based Linux operating system, enabling the development of both inferencing and edge-learning applications; a generic sketch of such a workload appears after this list. Additional AI frameworks, such as Caffe2, can be ported to the MIPS subsystem, and a wide variety of AI networks can be supported through ONNX conversion.
  • WaveTensor Technology: The WaveTensor subsystem can scale up to a PetaOP (1,000 TOPS) of 8-bit integer operations in a single core instantiation by combining extensible slices of 4x4 or 8x8 kernel matrix-multiplier engines for highly efficient execution of today’s key Convolutional Neural Network (CNN) algorithms. CNN execution performance can scale up to 8 TOPS/watt and over 10 TOPS/mm2 in industry-standard 7nm process nodes, using standard libraries at typical voltage and process conditions.
  • WaveFlow Technology: Wave Computing’s highly flexible, linearly scalable fabric is adaptable to a wide range of complex AI algorithms, as well as conventional signal processing and vision algorithms. The WaveFlow subsystem delivers low-latency, single-batch-size AI network execution and can be reconfigured to run multiple AI networks concurrently. The patented WaveFlow architecture also executes algorithms without intervention or support from the MIPS subsystem.
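
The following is a minimal sketch of the kind of TensorFlow workload the Linux-based programming environment described above is meant to host. It uses only generic, publicly documented TensorFlow/Keras APIs; the small CNN, data shapes and training step are illustrative placeholders and nothing here is Wave-specific.

    # Minimal sketch of an edge AI workload in generic TensorFlow/Keras.
    # Nothing here is Wave-specific: the tiny CNN, synthetic data and single
    # training pass stand in for the inferencing and edge-learning applications
    # the TritonAI 64 programming environment is described as hosting.
    import numpy as np
    import tensorflow as tf

    # A small convolutional network standing in for an edge inference model.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # Synthetic data; one short training pass stands in for edge learning.
    x = np.random.rand(16, 32, 32, 3).astype("float32")
    y = np.random.randint(0, 10, size=(16,))
    model.fit(x, y, epochs=1, verbose=0)

    # Batch-size-1 inference, the low-latency execution mode the platform targets.
    print(model.predict(x[:1]).argmax(axis=-1))

In the environment the release describes, such a script would run on the MIPS-hosted Debian/TensorFlow stack; how the network’s layers are mapped onto the WaveTensor and WaveFlow subsystems is handled by Wave’s software and is not shown here.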

Additional information about Wave Computing’s new TritonAI 64 platform, along with details on Wave’s complete portfolio of IP solutions, can be found at https://wavecomp.ai.

About Wave Computing

Wave Computing, Inc. is revolutionizing artificial intelligence (AI) with its dataflow-based systems and solutions that deliver orders of magnitude performance improvements over legacy architectures. The company’s vision is to bring deep learning to customers’ data wherever it may be—from the datacenter to the edge—helping accelerate time-to-insight. Wave is powering the next generation of AI by combining its dataflow architecture with its MIPS embedded RISC multithreaded CPU cores and IP. Wave Computing was named Frost & Sullivan’s 2018 “Machine Learning Industry Technology Innovation Leader” and is recognized by CIO Applications magazine as one of the “Top 25 Artificial Intelligence Providers.”  Wave now has over 400 granted and pending patents and hundreds of customers worldwide. More information about Wave Computing can be found at https://wavecomp.ai.


Source: Wave Computing
