
Report Calls for U.S. AI Strategy Based on Trust 

The rise of machine intelligence has prompted policy wonks to weigh in with caveats and recommendations for preserving the American technology lead in AI and machine learning while beginning to manage future risks.

The respected Center for Strategic and International Studies (CSIS) released a report on Thursday (March 1) calling for a national machine intelligence strategy. Underwritten by U.S. technology contractor Booz Allen Hamilton (NYSE: BAH), the study takes a “look-before-you-leap” approach to machine intelligence development while advocating steps for maintaining the current U.S. lead.

The report recommends “safe and responsible” development of machine intelligence, starting with funding long-term R&D in areas where the private sector has little incentive to invest. Along with risky, long-term research, the government should focus on national security applications and “systems of ethics and control” akin to government agencies that help referee disputes over technology standards.

The study also addresses growing concerns about the unforeseen consequences of machine learning deployments, acknowledging the need to “manage public anxiety.”

“While the apocalyptic warnings voiced by some in the tech community are overblown, [machine intelligence] systems will raise new challenges in the areas of privacy, algorithmic bias, system safety and control,” the report states. “The U.S. government can help confront these risks by leading in the development of safety, ethics, and control standards for [machine intelligence], and working with the private sector to develop methods of testing and certification for [those] systems.”

The authors note that China announced a national artificial intelligence strategy last year that emphasizes applications like surveillance and crime prediction. Russian President Vladimir Putin stressed the strategic importance of AI in an address last year.

(The CSIS report generally refers to the automation technology as “machine” rather than “artificial” intelligence, defining machine intelligence as “perform[ing] tasks normally requiring human intelligence.”)

Given the strategic stakes, the CSIS report also recommends a workforce initiative specifically geared to machine intelligence development, including a renewed emphasis on computer science education and technical skills like programming and AI model development. Those recommendations dovetail with recent efforts to plug a growing data science skills gap as big data and cheap, ubiquitous computing underpin the expansion of machine learning.

Meanwhile, the sector has seen an explosion of open-source tools for training AI models and gradually moving them into production. The report calls for expanding government open data efforts that both protect privacy and improve the quality of model training data.
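As a minimal sketch of that tooling pipeline (scikit-learn and joblib are illustrative stand-ins; neither the report nor this article names specific libraries), a model might be trained and serialized for production like this:

    # Illustrative only: train a model with open-source tooling and persist
    # it so a separate serving process can load it into production.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    import joblib

    # Train on a held-out split so quality can be checked before deployment.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")

    # Serialize the trained model; a production service would later call
    # joblib.load("model.joblib") to serve predictions.
    joblib.dump(model, "model.joblib")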

It also recommends development of standards to improve data quality, a step that would complement industry data management efforts focused on metadata considerations like data types and formats.
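For illustration only (the report does not prescribe a schema, and these field names are hypothetical), a metadata record documenting a training dataset’s types and formats might look like this:

    # Hypothetical metadata record for a training dataset; the field names
    # are illustrative and not drawn from the CSIS report or any standard.
    dataset_metadata = {
        "name": "patient_vitals_2017",
        "format": "CSV",
        "columns": {
            "patient_id": {"type": "string"},
            "heart_rate": {"type": "integer", "unit": "beats/min"},
            "recorded_at": {"type": "timestamp", "format": "ISO 8601"},
        },
        "provenance": "de-identified hospital records",
        "license": "restricted",
    }

Standards of this kind would let tools validate incoming data against a declared schema before it ever reaches a model.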

The lack of a governance framework threatens to undermine public trust in machine automation, the report warns. It cites last year’s censure of Google’s (NASDAQ: GOOGL) DeepMind unit for mishandling U.K. patient records that were used to train machine learning algorithms for medical applications. CSIS also warns that bots used to disrupt elections could grow in sophistication, making fake social media accounts nearly indistinguishable from humans.

These and other concerns, like the rise of AI-powered trading algorithms tied to recent stock market “flash crashes,” require a new set of policies to harness the technology. “These discussions will take time but will help to guide the efforts of policymakers and ensure that [machine intelligence] governance develops in partnership with industry, not in opposition to it,” the report concludes.

The private sector has acknowledged at least some of these concerns while arguing that the benefits outweigh the risks. “AI will be ubiquitous,” Arief Bavan, a Microsoft data architect, told an industry conference this week. “You’ll be utilizing AI without even realizing it.” The spread of AI systems means they “will know more about us than we know about AI,” a prospect Bavan conceded was “scary.”

Other practitioners note early efforts to “humanize” AI as a way of building trust. “The ‘A’ part of AI is actually going down,” asserted Uday Kamath, chief analytics officer at Digital Reasoning, a cognitive computing vendor that works with banks and U.S. intelligence agencies. Kamath told the data conference that machines are acquiring more “natural intelligence.”

Another key to building trust is greater transparency in areas like the use and misuse of customer and patient data.

The CSIS study notes that Canada, France and the U.K. have announced national strategies for promoting machine intelligence that emphasize “norms around privacy, equity and transparency.” The authors note that opaque machine intelligence platforms would “create significant safety and legal risks” for applications ranging from medical diagnostics to credit scoring.

 

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).

EnterpriseAI