Advanced Computing in the Age of AI | Friday, June 9, 2023

US Takes a Page from Supercomputing Past to Boost AI Research 

A pattern is emerging in how the U.S. government wants to boost its AI research. The approach mirrors how early supercomputing infrastructure was built: engage academia and the national labs, then apply the research to solve critical domestic problems.

The National Science Foundation (NSF) on Tuesday announced it was awarding $140 million to universities to promote fundamental research in artificial intelligence. The funding is targeted at specific institutions to address public-sector issues like cybersecurity, climate change, agriculture, public health and education. The NSF is also funding fundamental research into the building blocks of AI so that future models are ethical, trustworthy and accessible.

The need for coordinated initiatives in AI

"From the perspective of advancing ethical research and figuring out responsible use cases, the first step is foundational research. This is best placed at academic institutions and universities," said Hodan Omaar, a senior analyst focusing on AI policy at the Information Technology and Innovation Foundation, a think tank based in Washington, DC.

There has been pushback in recent years against AI research being concentrated in the private sector, "so there's an effort needed by the government to balance this out," Omaar said.

The AI research being done by the universities is not market-oriented; instead, it focuses on solving public-sector problems that are high on the U.S. agenda. Similarly, supercomputers at national laboratories are prioritized for tasks like economic modeling and weapons development.

The U.S. government is increasingly concerned about the responsible and safe use of AI. The fast growth of AI tools like ChatGPT has alarmed U.S. cybersecurity officials. Late last month, several U.S. enforcement agencies, including the Federal Trade Commission (FTC) and the Department of Justice (DOJ), expressed concern that artificial intelligence and automated systems could be used to break laws. The agencies said they would use existing laws and enforcement actions to promote the responsible use of AI.

“Technological advances can deliver critical innovation – but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition,” said FTC chair Lina Khan in a statement.

The AI arms race

NSF is funding these AI projects as the U.S., the European Union and China engage in an AI arms race. The EU is considering AI legislation called the Artificial Intelligence Act, which would set boundaries for AI development to ensure safe and desirable outcomes.

A string of private-sector AI breakthroughs from OpenAI, Microsoft and Google have put the U.S. ahead of China in AI. Previously, China was seen as a global leader because of its use of AI in efforts like the social credit system. The Chinese system rates individuals using facial recognition software, drawing on data from more than 200 million surveillance cameras in the country, HR firm Horizons said on its website.

The U.S. government is not competing with the EU and China on issues like climate change or agriculture, and the private sector won’t provide tools to solve those problems, Omaar said. The White House is trying to get a handle on these domestic problems with AI, and for decades has relied on academia to find answers, Omaar continued. 

The programs funded by the NSF will build a stronger knowledge base and will eventually factor into the AI race against China and the EU. "Ultimately, the U.S. will have to economically stay competitive in AI," Omaar said.

Geographic diversity

Omaar noted that the NSF funding for AI programs was spread across different geographies, one way to keep all regions engaged. She pointed to NSF funding for a group led by the University of Minnesota, Twin Cities to build foundational AI knowledge related to agriculture and forestry. The university sits in the U.S. heartland, the work is highly relevant to the region, and the award will raise the university's profile so it can better compete with other research institutions.

“NSF is trying to spread access across the country. If it is one in forestry, it is in a university that struggles with the issue,” Omaar said. 

The NSF has opened national institutes at academic and research institutions to further its AI agenda. "Today’s investment means the NSF and funding partners have now invested close to half a billion dollars in the AI Institutes research network, which reaches almost every U.S. state," the agency said in a statement.

Areas of funding

The funding includes access to computing resources – possibly at supercomputing centers – such as “central processing unit (CPU) and graphics processing unit (GPU) options with multiple accelerators per node, high-speed networking, and sufficient memory capacity (i.e., at least one terabyte per node),” according to a January document from the National Artificial Intelligence Research Resource Task Force, which was created by the NSF and the White House Office of Science and Technology Policy (OSTP) to guide the AI research roadmap. 

Among the other NSF-funded AI endeavors: a team led by the University of California at Santa Barbara will look at AI cyber-agents that catch and prevent hacks; cybersecurity specialists believe that tools like ChatGPT could be used to write code to hack systems or to draft messages for social engineering. A group led by the University of Maryland will develop a framework for unbiased AI systems that can be trusted across racial and gender divides. A team led by Columbia University will try to establish connections between AI and how the human brain processes information. The University at Buffalo and other institutions will focus on AI systems that assess whether children need interventions. Two other NSF-funded programs will look at AI in decision-making and in education.

A larger U.S. effort

The NSF funding initiative was part of a larger White House announcement on the safe and responsible use of AI. The White House also secured commitments from top AI companies including Google, Microsoft, OpenAI and Nvidia to have their AI systems evaluated on a platform developed by Scale AI at DEFCON 31, which will be held in Las Vegas in August.

“This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models,” the White House said in its statement. 

It is not clear whether large language models such as OpenAI’s GPT-4, the brain behind Microsoft’s BingGPT, will be open for evaluation. GPT-4 is a closed model, and OpenAI has disclosed far fewer technical details about it than about predecessors such as GPT-3.

Looming ubiquity

The release of ChatGPT late last year set off a storm of activity around generative AI, which has boosted public and private investment in the technology. A survey released by the World Economic Forum this week stated that "generative AI has received particular attention recently, with claims that 19% of the workforce could have over 50% of their tasks automated by AI." About 75% of the companies surveyed by WEF plan to put artificial intelligence into daily operations in the coming years. 

Microsoft released BingGPT, which is based on GPT-4, for testing a few months ago, and the early results were poor: the AI tool often hallucinated, providing inaccurate or moody answers. Google hasn’t risked releasing a wide beta of an untested tool and has much more at stake in the event of a backlash. Microsoft had little to lose by making its AI technology public: its 2.79% search engine market share in April is dwarfed by Google's 92.61%, according to numbers from StatCounter.