
NIST Tackles Explainable AI Gap 


Among the best ways to create stable technologies are standards and specifications that provide a template for building trust while often seeding new technological ecosystems. That’s especially true for AI, where a lack of trust and an inability to explain decisions have hindered innovation and wider enterprise adoption of AI platforms.

Indeed, early corporate AI deployments have underscored users’ unwillingness to base critical decisions on opaque machine reasoning.

A new initiative launched by the U.S. National Institute of Standards and Technology (NIST) proposes four principles for determining how accurately AI-based decisions can be explained. A draft publication released this week seeks public comments on the proposed AI explainability principles. The comment period extends through Oct. 15, 2020.

The draft “is intended to stimulate a conversation about what we should expect of our decision-making devices,” the agency said Tuesday (Aug. 18), encouraging feedback from engineers, computer scientists, social scientists and legal experts.

“AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why,” said Jonathon Phillips, a NIST electronic engineer and report co-author. “But an explanation that would satisfy an engineer might not work for someone with a different background.”

Hence, NIST is casting a wider net to collect a range of views from a diverse list of stakeholders.

The proposed AI principles take a systems approach, emphasizing explanation, meaning, accuracy and “knowledge limits”:

  • AI systems should deliver accompanying evidence or reasons for all their outputs.
  • Systems should provide explanations that are meaningful or understandable to individual users.
  • Explanations should correctly reflect the system’s process for generating the output.
  • The system only operates under conditions for which it was designed, or when the system achieves sufficient confidence in its output.

Expanding on the final principle, NIST said AI-based systems lacking sufficient confidence in a decision should refrain from supplying that decision to a user.
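To make the "knowledge limits" idea concrete, here is a minimal sketch of a classifier that withholds a prediction when its confidence falls below a chosen cutoff. The model, dataset and 0.9 threshold are illustrative assumptions, not NIST recommendations.

```python
# Sketch of the "knowledge limits" principle: abstain when the model's
# confidence is below a threshold. Threshold and model are assumptions.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; tune per application


def decide(sample):
    """Return a class label only if the model is sufficiently confident."""
    probs = model.predict_proba([sample])[0]
    if probs.max() < CONFIDENCE_THRESHOLD:
        return None  # abstain: the input falls outside the system's knowledge limits
    return int(probs.argmax())


print(decide(X[0]))  # confident case -> class label; otherwise None
```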

The NIST initiative builds on nuts-and-bolts industry efforts to develop AI platforms that explain how decisions were reached. For example, Google Cloud launched a collection of frameworks and tools late last year designed to explain to users how each data factor contributed to the output of a machine learning model.
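As a generic illustration of that kind of per-feature attribution (not Google Cloud's actual tooling), the sketch below uses scikit-learn's permutation importance, which estimates each feature's contribution by measuring how much shuffling it degrades the model's score.

```python
# Illustrative feature-attribution analog: permutation importance ranks
# features by how much randomizing each one hurts model accuracy.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

result = permutation_importance(model, data.data, data.target,
                                n_repeats=10, random_state=0)
for name, score in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```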

NIST officials also emphasized the need for closer study of human-machine interactions, collaborations they stress have yielded greater accuracy in explaining how decisions are reached by AI algorithms.

“As we make advances in explainable AI, we may find that certain parts of AI systems are better able to meet societal expectations and goals than humans are,” Phillips added. “Understanding the explainability of both the AI system and the human opens the door to pursue implementations that incorporate the strengths of each.”

Additional details are available on NIST’s AI explainability portal.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).
