
Cortical.io Demonstrates 2,800x Acceleration and 4,300x Increase in Energy Efficiency Over BERT 

VIENNA and SAN FRANCISCO, July 12, 2022 — Cortical.io today announced its breakthrough prototype for classifying high volumes of unstructured text. Classifying documents or messages is one of the most fundamental Natural Language Understanding (NLU) functions in business artificial intelligence (AI). The benchmark was carried out on two similar system setups using the same off-the-shelf dual AMD Epyc server hardware. The “BERT” system, based on a transformer machine learning technique for natural language processing, was augmented with an NVIDIA GPU. The “Semantic Folding” system used a cost-comparable number of Xilinx Alveo FPGA accelerator cards.

The goal of the benchmark was to compare the throughput of the classification-inference engines of the two systems. To measure performance, Cortical.io classified sixteen different data sets, including well-known corpora such as Enron (Kaminski, Farmer, and Lokay), DBpedia, IMDb, PubMed, Reuters (R8, R52), Ohsumed, Web of Science, BBC News and others.
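
Throughput here simply means the volume of raw text pushed through the classifier per unit of time. A minimal sketch of such a measurement, assuming a hypothetical classify function standing in for either engine's inference call (not the benchmarked engines themselves), might look like this:

```python
import time

def measure_throughput(classify, documents):
    """Measure classification throughput in MB/sec.

    `classify` is a hypothetical stand-in for either engine's inference call;
    `documents` is an iterable of raw text strings.
    """
    total_bytes = 0
    start = time.perf_counter()
    for doc in documents:
        classify(doc)                                # predicted label is discarded; only timing matters
        total_bytes += len(doc.encode("utf-8"))
    elapsed = time.perf_counter() - start
    return (total_bytes / 1_000_000) / elapsed       # MB processed per second

if __name__ == "__main__":
    # Trivial dummy classifier and corpus, used only to exercise the timing loop.
    dummy_classify = lambda text: "business" if "invoice" in text else "other"
    corpus = ["Please find the attached invoice for last quarter."] * 10_000
    print(f"{measure_throughput(dummy_classify, corpus):.2f} MB/sec")
```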

Staggering results were achieved by the simultaneous application of three distinct innovative steps:  

  • Improving the machine learning approach by applying Semantic Folding (illustrated in the sketch after this list).  
  • Using tooling that enabled the concurrent implementation of the software, hardware and networking aspects of the Semantic Folding approach.  
  • Exploiting the parallelism of large gate arrays, implemented in practice with FPGA technology in the form of COTS (commercial off-the-shelf) datacenter hardware from Xilinx.
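
Cortical.io has not published the internals of its production engine, but the general idea behind Semantic Folding is to encode text as large, sparse binary “fingerprints” and to classify by measuring bit overlap, an operation that maps naturally onto FPGA logic. A purely illustrative sketch, with made-up fingerprint sizes and a toy word-hashing encoder in place of Cortical.io's trained semantic space, might look as follows:

```python
import zlib
import numpy as np

FINGERPRINT_BITS = 16_384   # illustrative size; real semantic fingerprints differ

def toy_fingerprint(text: str) -> np.ndarray:
    """Hash each word onto a few bits of a sparse binary vector.

    This is a toy encoder for illustration only, not the trained "retina"
    Cortical.io uses to place related words on nearby bits.
    """
    fp = np.zeros(FINGERPRINT_BITS, dtype=np.uint8)
    for word in text.lower().split():
        seed = zlib.crc32(word.encode("utf-8"))          # deterministic per-word seed
        rng = np.random.default_rng(seed)
        fp[rng.choice(FINGERPRINT_BITS, size=8, replace=False)] = 1
    return fp

def overlap(a: np.ndarray, b: np.ndarray) -> int:
    """Similarity = number of shared active bits (bitwise AND plus popcount)."""
    return int(np.sum(a & b))

def classify(text: str, class_fingerprints: dict) -> str:
    """Assign the class whose aggregated fingerprint overlaps most with the text."""
    fp = toy_fingerprint(text)
    return max(class_fingerprints, key=lambda label: overlap(fp, class_fingerprints[label]))

# Example: build class fingerprints from a handful of labeled snippets (toy data).
classes = {
    "energy": toy_fingerprint("gas pipeline trading power electricity market"),
    "sports": toy_fingerprint("match goal season league player score"),
}
print(classify("electricity prices moved the power market", classes))
```

Because classification reduces to wide bitwise ANDs and popcounts over sparse binary vectors, the same operation can be replicated massively in parallel across FPGA gates, which is what the third point above exploits.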

Benchmark results

                BERT                          Semantic Folding        Semantic Folding
Hardware        AMD Epyc Milan + NVIDIA GPU   AMD Epyc Milan          AMD Epyc Milan +
                (Python)                      (Java)                  4-card Xilinx FPGA (binary)
MB / sec        0.18                          18.42                   528.30
Acceleration    1x                            100x                    2,865x
Energy / MB     2,260 mWh                     15 mWh                  0.46 mWh
Efficiency      1x                            150x                    4,298x

The benchmark results show that with Semantic Folding, operating costs can be reduced from several dollars per classifier to a fraction of a cent, making large-scale classification use cases commercially viable for the first time. Example real-world workloads include hate-speech detection for nearly three billion Facebook users or content filtering of the Twitter firehose for hundreds of millions of users.

“Efficiency is the new precision in Artificial Intelligence,” said Francisco Webber, CEO at Cortical.io. “While large industries are determined to use less energy, the AI and ML industry is headed in the opposite direction: growing its carbon footprint exponentially. The future of green computing hangs by the thread of high efficiency AI capabilities.” 

About Cortical.io 

Cortical.io delivers highly efficient AI-based solutions that help enterprises unlock the value of unstructured text by leveraging a game-changing approach to Natural Language Understanding (NLU). Cortical.io SemanticPro is an intelligent document processing solution that accurately extracts, analyzes and classifies information based on meaning and builds the basis for document workflow automation. With more than 10 years of expertise in implementing NLU solutions in the enterprise, Cortical.io has demonstrated its ability to solve the challenges of language ambiguity and variability across many use cases and verticals for Fortune 500 companies. Cortical.io has offices in the U.S. (New York and San Francisco) and Europe (Vienna). For more information, go to https://www.cortical.io.


Source: Cortical.io
