NSF Funds Cybersecurity, Trusted AI Research 

The National Science Foundation will fund a broad range of cybersecurity, data privacy and network security research projects, including a new Center for Trustworthy Machine Learning.

NSF’s Secure and Trustworthy Cyberspace initiative will support 225 cyber and network security, privacy, cryptography and AI projects totaling $78.2 million, the agency announced Wednesday (Oct. 24). Funding for the machine learning center totals $10 million over five years.

The security initiative will advance research aimed at protecting “cyber systems from malicious behavior, while preserving privacy and promoting usability,” said Jim Kurose, NSF’s assistant director for computer and information science and engineering.

"Our goal is to identify fundamentally new ways to design, build, and operate secure cyber systems at both the systems and application levels, protect critical infrastructure, and motivate and educate individuals about security and privacy," Kurose added.

Of particular emphasis are vulnerabilities in AI-based systems ranging from image recognition to malware detection models, many of which can be compromised while they are being trained. Hence, the new machine learning center will focus its research on methods to defend trained models from attacks; improving the “robustness” of models and training data; and developing countermeasures to attacks based on “abuse [of] generative machine learning models,” which often rely on unsupervised learning.
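To make the threat concrete, the sketch below is a hypothetical illustration, not drawn from the article or the NSF program, of an evasion-style attack of the kind this research targets: a small, gradient-guided change to an input flips a toy classifier's decision. All weights, inputs and the perturbation size are invented for the example.

```python
# Minimal sketch (illustrative only): an evasion attack on a toy
# logistic-regression "model", in the spirit of adversarial-example research.
import numpy as np

# Toy "trained" model: weights w, bias b (made up for the example).
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input the model classifies confidently as class 1.
x = np.array([1.0, 0.2, 0.3])
print("clean input -> p(class 1) =", round(predict_proba(x), 3))  # ~0.88

# Fast-gradient-sign-style perturbation: nudge each feature by a small
# amount (epsilon) in the direction that most increases the loss for the
# true label, i.e. the sign of the gradient of the log-loss w.r.t. x.
epsilon = 0.6
y_true = 1.0
grad_wrt_x = (predict_proba(x) - y_true) * w   # log-loss gradient for a linear model
x_adv = x + epsilon * np.sign(grad_wrt_x)

print("adversarial -> p(class 1) =", round(predict_proba(x_adv), 3))  # ~0.39
# A small, structured change to the input flips the decision, which is why
# model and training-data "robustness" is a research focus.
```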

Researchers have found that the algorithms and processing systems used for machine learning are vulnerable to attack. “We have a unique opportunity at this time, before machine learning is widely deployed in critical systems, to develop the theory and practice needed for robust learning algorithms that provide rigorous and meaningful guarantees,” said Patrick McDaniel, lead principal investigator and professor of electrical engineering and computer science at Penn State University.

The need for “trustworthy AI” has emerged in critical fields like medicine, where physicians have grown leery of relying on “black box” systems that so far cannot explain the basis for their conclusions or predictions. Hence, there is growing demand for these machine learning systems to “show their work” as a way of building trust and boosting adoption.

That requirement has also spawned a field of research dubbed “explainable AI.”
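As a loose illustration of what “showing the work” can mean, the sketch below (again hypothetical, not drawn from the article) prints per-feature contributions for a toy linear model; explainable-AI techniques generalize this kind of attribution to far more complex models. The feature names, weights and input values are invented.

```python
# Minimal sketch (illustrative only): a per-feature "explanation" for a
# toy linear model, showing the simplest form of model attribution.
import numpy as np

feature_names = ["blood_pressure", "age", "cholesterol"]
w = np.array([0.8, 0.3, -0.4])      # toy "trained" weights
b = -0.2
x = np.array([1.2, 0.5, 0.9])       # one (standardized) patient record

score = x @ w + b
print(f"model score: {score:.2f}")

# For a linear model the prediction decomposes exactly into one additive
# contribution per feature -- a rudimentary way for the model to "show its work".
for name, contribution in zip(feature_names, w * x):
    print(f"  {name:>15}: {contribution:+.2f}")
print(f"  {'bias':>15}: {b:+.2f}")
```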

Other collaborators in the machine learning security initiative include Stanford University, University of Virginia, University of California-Berkeley, University of California-San Diego and the University of Wisconsin-Madison.

A complete list of research projects funded under the NSF security initiative is here. NSF funding for the projects ranges from up to $500,000 for three years to up to $1.2 million for up to four years.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).
