
Intel Joins the MLCommons AI Safety Working Group 

Oct. 27, 2023 -- Intel announced it is joining the new MLCommons AI Safety (AIS) working group alongside artificial intelligence experts from industry and academia. As a founding member, Intel will contribute its expertise to help create a flexible platform for benchmarks that measure the safety and risk factors of AI tools and models. As testing matures, the standard AI safety benchmarks developed by the working group will become a vital element of society's approach to AI deployment and safety.

Deepak Patil, Intel corporate vice president and general manager for Data Center AI Solutions, commented: “Intel is committed to advancing AI responsibly and making it accessible to everyone. We approach safety concerns holistically and develop innovations across hardware and software to enable the ecosystem to build trustworthy AI. Due to the ubiquity and pervasiveness of large language models, it is crucial to work across the ecosystem to address safety concerns in the development and deployment of AI. To this end, we’re pleased to join the industry in defining the new processes, methods and benchmarks to improve AI everywhere.”

Responsible training and deployment of large language models (LLMs) and related tools is essential to mitigating the societal risks these powerful technologies pose. Intel has long recognized the ethical and human rights implications of technology development, especially in AI.

The working group will provide a safety rating system to evaluate the risk posed by new, fast-evolving AI technologies. Intel's participation in the AIS working group is the latest of the company's commitments to advancing AI technologies responsibly.

The AI Safety working group is organized by MLCommons with participation from a multidisciplinary group of AI experts. The group will develop a platform and pool of tests from contributors to support AI safety benchmarks for diverse use cases.

Intel plans to share its AI safety findings as well as best practices and processes for responsible development, such as red teaming and safety testing. The full list of participating members is available on the MLCommons website.

The working group's initial focus will be developing safety benchmarks for LLMs, building on the groundwork of researchers at Stanford University's Center for Research on Foundation Models and its Holistic Evaluation of Language Models (HELM). Intel will share with the AIS working group the rigorous, multidisciplinary review processes it uses internally to develop AI models and tools, helping to establish a common set of best practices and benchmarks for evaluating the safe development and deployment of generative AI tools that leverage LLMs.

For more on the MLCommons AI Safety Working Group, visit the MLCommons website.


Source: Intel