
DarwinAI Team Publishes Key Explainability Paper, Works to Improve Industry-Wide Trust in AI 

VANCOUVER, British Columbia, Dec. 9, 2019 -- DarwinAI, a Waterloo, Canada startup creating next-generation technologies for Artificial Intelligence development, today announced that the company has conducted academic research that answers a key industry question: “How can enterprises trust AI-generated explanations?”

Explainability is central to addressing AI’s “black box” problem: without it, it is nearly impossible for a human to understand how a deep neural network reaches its decisions. To date, there has been limited quantitative assessment of explainability methods in this nascent field, and most existing evaluations rely on subjective visual interpretation.

The paper, authored by the DarwinAI team, espouses a machine-centric strategy to quantify the performance of explainability algorithms and will be presented at NeurIPS 2019, one of the most prestigious AI conferences in the industry. DarwinAI, which was also recently named a Gartner “Cool Vendor,” is working on a new version of its explainability platform that will offer additional features to bolster enterprises’ understanding of and trust in AI.

“The question of how deep neural networks make decisions has plagued researchers and enterprises alike and is a significant roadblock to the widespread adoption of this particular form of AI,” said Sheldon Fernandez, CEO, DarwinAI. “It is critical that enterprises obtain some understanding of how a neural network reaches its decisions in order to design robust models with a certain level of trust.”

“Explainability in neural networks has been a core concept for deep learning engineers – a necessary and crucial goal for our industry,” said Drew Gray, CTO of Voyage, an autonomous driving company working with DarwinAI. “With this research, the DarwinAI team has introduced concrete performance metrics for explainability that also highlight the benefits of their own approach. Their toolset takes the concept to the next level by translating explainable insights into recommendations for both model design and dataset augmentation. The latter is particularly exciting for us.”

DarwinAI Research Momentum at NeurIPS

DarwinAI’s paper, “Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms,” explores a machine-centric strategy for quantifying the performance of explainability methods on deep neural networks.

Essentially, the team subjected a deep learning network to a clever psychology test: removing the input features an explanation identifies as critical and having the network re-evaluate the modified input, with the resulting change in the network’s decision indicating the efficacy of the given algorithm. The team conducted a comprehensive analysis using this approach on several state-of-the-art explainability methods, including LIME, SHAP, Expected Gradients and GSInquire, the company’s own proprietary technique.
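To make the idea concrete, here is a minimal sketch of this machine-centric evaluation in Python. The function and variable names are hypothetical illustrations, not from the paper, and the paper’s actual metrics are defined more precisely; the sketch simply masks out the features an explanation flags as critical, re-runs inference, and measures how the decision changes.

```python
import torch

def explanation_impact(model, x, critical_mask, baseline=0.0):
    """Hypothetical sketch: quantify an explanation by deleting the input
    features it flags as critical and checking whether the network's
    decision actually changes as a result.

    critical_mask is a boolean tensor with the same shape as x, marking
    the features the explainability algorithm identified as decisive.
    """
    model.eval()
    with torch.no_grad():
        # Original prediction and its confidence.
        probs = torch.softmax(model(x), dim=1)
        conf, label = probs.max(dim=1)

        # Remove the "explanatory variables" the algorithm identified.
        x_perturbed = x.clone()
        x_perturbed[critical_mask] = baseline

        # Have the network re-evaluate the perturbed input.
        probs_new = torch.softmax(model(x_perturbed), dim=1)
        conf_new = probs_new[torch.arange(x.size(0)), label]

    # A faithful explanation should flip the decision or sharply
    # reduce confidence in the originally predicted label.
    flipped = probs_new.argmax(dim=1) != label
    return flipped, conf - conf_new
```

Averaged over a dataset, the fraction of inputs whose decisions flip (or whose confidence drops sharply) yields a machine-centric score for each explainability method, with no subjective human judgment in the loop.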

The DarwinAI team, which presented five papers at NeurIPS 2018, will showcase the company’s explainability metrics research, along with four additional research papers, at NeurIPS 2019.

In one study, the team introduced YOLO Nano, a highly compact deep convolutional neural network designed for embedded object detection on edge and mobile devices. The model was generated using the company’s Generative Synthesis platform, which uses AI itself to optimize neural networks and reduce their computational requirements. Moreover, the technology is complementary to hardware toolkits that improve performance on specific chipsets. For example, engineers were able to dramatically accelerate the inference performance of models produced by Generative Synthesis by leveraging the Intel Deep Learning Boost technology built into 2nd Gen Intel Xeon Scalable processors. This combination constitutes a powerful offering for deep learning practitioners.
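As a rough illustration of how such hardware-level acceleration composes with a compact model, the sketch below uses PyTorch’s post-training dynamic quantization on a stand-in network (YOLO Nano itself and DarwinAI’s tooling are not reproduced here). It maps eligible layers to INT8 kernels, the class of operation that INT8 vector instructions such as Intel DL Boost accelerate on supported CPUs.

```python
import torch
import torch.nn as nn

# A stand-in classifier; YOLO Nano itself is not public in this sketch.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Post-training dynamic quantization: weights of the Linear layers are
# stored as INT8 and their matmuls run through INT8 kernels. On CPUs
# with INT8 vector instructions (e.g., Intel DL Boost / AVX-512 VNNI),
# these kernels are what receives the hardware acceleration.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    out_fp32, out_int8 = model(x), quantized(x)
print(out_fp32.shape, out_int8.shape)  # same interface, reduced compute
```

Quantization is only one of the hardware-side levers, but it shows why the two approaches compose: architecture optimization shrinks the network, while instruction-level acceleration speeds up the operations that remain.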

“Intel and DarwinAI frequently work together to optimize and accelerate artificial intelligence performance on a variety of Intel hardware,” said Wei Li, vice president and general manager of Machine Learning Performance at Intel. “But in addition to performance, we are very supportive of their research into algorithm transparency and explainability, which will help make AI deployments fair, auditable and ethical.”

Continued Industry Accolades and Product Momentum

In addition to spearheading industry-leading research, DarwinAI has also been recognized recently with the following honors and designations:

  • Gartner “Cool Vendors” -- DarwinAI was named a “Cool Vendor” in Gartner’s October 2019 Cool Vendors in Enterprise AI Governance and Ethical Response report. According to the report, “These vendors help organizations better govern their AI solutions, and make them more transparent and explainable.” The report goes on to state, “The vendors in this research all apply unique and novel approaches to helping organizations increase their governance and explainability of AI solutions. This is the theme of our selection of vendors for this report, which focuses on profiled companies that employ a variety of AI techniques to transform ‘black box’ ML models into easier to understand, more transparent ‘glass box’ models.” The Cool Vendor report recognizes “emerging vendors that data and analytics leaders should watch.”
  • Timmy Awards -- In October, DarwinAI was voted Best Tech Startup/Community Favorite (Toronto) at Tech in Motion’s Timmy Awards. This award recognizes a local startup “with an entrepreneurial spirit that employs forward-thinking leaders, possesses a great work environment, and produces an innovative product that aims to disrupt the market.”
  • Impact 50 -- DarwinAI moved up two spots to number 40 on insideBIGDATA’s Q4 2019 IMPACT 50 list, a quarterly ranking of the most important movers and shakers in the big data industry; companies that earn a place on the list have proved their value with leading-edge products and services. The company was also included on the IMPACT 50 list for Q1 2019 (#48), Q2 2019 (#44) and Q3 2019 (#42).

Upcoming Explainability Product Availability and Details

DarwinAI released the first version of its explainability platform, enabled by the company’s Generative Synthesis technology, to enterprise customers in Summer 2019. The company is currently updating and testing the platform with select clients, with plans for a commercial general availability (GA) release in early 2020.

About DarwinAI

Founded by renowned academics at the University of Waterloo, DarwinAI develops Generative Synthesis technology, the next evolution in AI development, which demystifies the complexities of deep neural networks while unraveling their opaqueness. Based on years of distinguished scholarship, the company’s patented AI-assisted platform enables deep learning design, optimization and explainability, with a special emphasis on enabling AI at the edge, where computational and energy resources are limited. To learn more about DarwinAI, visit www.darwinai.ca or follow @darwinAI on Twitter.


Source: DarwinAI 
