
Exciting Updates From Stanford HAI’s Seventh Annual AI Index Report 


As the AI revolution marches on, it is vital to continually reassess how this technology is reshaping our world. To that end, researchers at Stanford’s Institute for Human-Centered AI (HAI) put out a yearly report to track, distill, and visualize data specific to the world of AI.

With today’s release of HAI’s seventh annual AI Index Report, Stanford’s researchers are hoping to give decision-makers the knowledge they need to responsibly and ethically integrate this technology into their routines. The full report, which spans nearly 400 pages, is packed full of information about the state of AI.

The following are some of the most important takeaways gleaned from the full report:

Industry is Driving AI Development

While the report mentions that academia dominated the world of machine learning models up until 2014, this is clearly no longer the case. In 2023, the report found 51 notable machine learning models that were produced by private industry.


This is compared to just 15 models originating from academia and 21 models in industry-academic collaborations. Government-owned models rounded out the bottom of the list with 2 models.

This shift seems to be related to the resources required to build these machine learning models. The enormous amounts of data, computing power, and money needed to train them are increasingly beyond the reach of academic institutions. This shift was first noted in last year’s AI Index report, although the gap between industry and academia appears to have narrowed slightly.

AI's Transformative Economic Impacts

The report found an interesting trend in global AI investment. While private investment in AI nearly doubled between 2020 and 2021, it has declined since then. Investment in 2023 fell roughly 7 percent from 2022, to $95.99 billion, and 2022 itself saw an even larger drop from 2021.


In terms of the Gartner Hype Cycle, it would appear that the “Peak of Inflated Expectations” occurred in 2021. If so, the relatively shallow “Trough of Disillusionment” currently reflected in global investment suggests that the market still sees substantial value in AI.

Additionally, while overall investment in AI dipped slightly, private investment in generative AI exploded. In 2023, investment in this area reached $25.2 billion, a ninefold increase over 2022 and nearly a 30-fold increase over 2019. In fact, about a quarter of all private AI investment in 2023 went to generative AI.
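
As a quick sanity check, those multiples line up with the dollar figures reported elsewhere in this article. The short Python sketch below back-calculates the implied 2019 and 2022 generative AI totals from the stated multiples; these implied baselines are derived here for illustration, not quoted directly from the report.

    # Back-of-the-envelope check of the generative AI investment figures
    # quoted above. The 2022 and 2019 baselines are implied by the stated
    # multiples, not taken directly from the AI Index report.
    gen_ai_2023 = 25.2        # generative AI private investment, $B (2023)
    total_ai_2023 = 95.99     # total private AI investment, $B (2023)

    implied_2022 = gen_ai_2023 / 9     # "a ninefold increase over 2022"
    implied_2019 = gen_ai_2023 / 30    # "nearly a 30-fold increase over 2019"
    share_of_total = gen_ai_2023 / total_ai_2023

    print(f"Implied 2022 generative AI investment: ~${implied_2022:.1f}B")
    print(f"Implied 2019 generative AI investment: ~${implied_2019:.2f}B")
    print(f"Generative AI share of 2023 AI investment: {share_of_total:.0%}")
    # -> roughly $2.8B, $0.84B, and 26%, consistent with "about a quarter"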


Additionally, beyond the money flowing in, AI is delivering cost reductions and revenue gains to the organizations that implement it. Overall, 42% of respondents reported cost decreases as a result of AI implementation, while 59% reported revenue gains. Compared to last year, that represents a 10 percentage point increase in respondents reporting cost decreases and a 3 percentage point decrease in those reporting revenue gains.

Looking more granularly, the three business functions that most frequently reported cost decreases were manufacturing (55%), service operations (54%), and risk (44%). For revenue gains, the functions most likely to report a benefit were manufacturing (66%), marketing and sales (65%), and strategy and corporate finance (64%).

Lack of Standardized Responsible AI Evaluations

As society more deeply integrates AI into daily operations, there is a growing desire for responsibility and trustworthiness in the technology. The report specifically mentioned the responsible AI benchmarks TruthfulQA, RealToxicityPrompts, ToxiGen, BOLD, and BBQ and tracked their year-over-year citations. While citations do not perfectly reflect benchmark use, they serve as a rough proxy for the industry’s attention to these benchmarks. Every benchmark mentioned saw more citations in 2023 than in 2022, which would indicate that organizations are taking responsible AI seriously.

That said, the AI Index also notes that standardized reporting on responsible AI is lacking: there is no universally accepted set of responsible AI benchmarks. Of the five model developers the report examined, three used TruthfulQA, while RealToxicityPrompts, ToxiGen, BOLD, and BBQ were each used by only one.
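
For readers unfamiliar with how such benchmarks are applied, the sketch below shows the general pattern a developer might follow: load a benchmark’s items, query a model, and score the responses. It is purely illustrative; load_benchmark and query_model are hypothetical placeholders for a real dataset loader and model API, and the plain multiple-choice accuracy shown is not the official metric of any benchmark named above.

    # Illustrative sketch of a benchmark evaluation loop. `load_benchmark`
    # and `query_model` are hypothetical placeholders for a real dataset
    # loader and model API; the metric is plain multiple-choice accuracy,
    # not the official scoring of TruthfulQA, BBQ, or the others above.
    from typing import Callable

    def evaluate(load_benchmark: Callable[[], list],
                 query_model: Callable[[str, list], int]) -> float:
        """Return the fraction of benchmark items the model answers correctly."""
        items = load_benchmark()  # each item: {"question", "choices", "answer_idx"}
        correct = sum(
            query_model(item["question"], item["choices"]) == item["answer_idx"]
            for item in items
        )
        return correct / len(items)

    # Stub usage: one made-up item and a "model" that always picks choice 1.
    stub_items = [{"question": "Does the Earth orbit the Sun?",
                   "choices": ["No", "Yes"], "answer_idx": 1}]
    print(evaluate(lambda: stub_items, lambda q, choices: 1))  # 1.0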

Clearly, the industry must settle on responsible AI benchmarks and begin to standardize as soon as possible.

AI Accelerating Scientific Breakthroughs

AI has proven time and again to be a deeply useful tool for scientific discovery. The report highlights several science-related AI applications that made major strides in 2023:

  • AlphaDev: An AI system from Google DeepMind that makes algorithmic sorting more efficient (see the sketch just after this list).
  • FlexiCubes: A 3D mesh optimization tool that uses AI for gradient-based optimization with adaptable parameters, improving mesh generation for applications in video games, medical imaging, and beyond.
  • Synbot: Synbot integrates AI planning, robotic control, and physical experimentation in a closed loop, enabling autonomous development of high-yield chemical synthesis recipes.
  • GraphCast: A weather forecasting tool that can deliver highly accurate 10-day weather predictions in under a minute.
  • GNoME: An AI tool that facilitates the process of materials discovery.
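
To give a sense of the kind of routine AlphaDev targets, the sketch below implements a classic three-element sorting network: a short, fixed sequence of compare-and-swap steps with no loops. This is a standard textbook construction shown for illustration only, not the specific instruction sequence AlphaDev discovered.

    # A classic 3-element sorting network: three fixed min/max comparators,
    # no loops. AlphaDev searched for shorter assembly-level versions of
    # small fixed-size routines like this; the network here is a standard
    # textbook construction shown purely for illustration.
    def sort3(a, b, c):
        """Sort three values with a fixed comparator network."""
        a, b = min(a, b), max(a, b)   # comparator on positions (0, 1)
        b, c = min(b, c), max(b, c)   # comparator on positions (1, 2)
        a, b = min(a, b), max(a, b)   # comparator on positions (0, 1) again
        return a, b, c

    print(sort3(3, 1, 2))  # (1, 2, 3)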

The report also broke down some of the more influential AI tools in medicine:

  • SynthSR: An AI tool that converts clinical brain scans into high-resolution T1-weighted images.
  • Coupled Plasmonic Infrared Sensors: AI-coupled plasmonic infrared sensors that can detect neurodegenerative diseases such as Parkinson’s and Alzheimer’s.
  • EVEscape: This AI application is capable of forecasting viral evolution to enhance pandemic preparedness.
  • AlphaMissense: Enables better classification of missense genetic mutations.
  • Human Pangenome Reference: An AI-assisted effort to map the human genome more completely.

The report also found that highly capable medical AI is here and in use. AI systems have improved significantly over the last few years on the MedQA benchmark, a crucial test for evaluating AI’s clinical knowledge. The most notable model of 2023, GPT-4 Medprompt, achieved an accuracy of 90.2%, a 22.6 percentage point improvement over the top score of 2022. AI performance on MedQA has almost tripled since the benchmark’s launch in 2019.
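
Working backward from those numbers gives a sense of how quickly MedQA performance has climbed. The values below are rough derivations from the figures stated above, for illustration only, not scores quoted directly from the report.

    # Implied earlier MedQA scores, back-calculated from the figures above.
    # These are rough derivations for illustration, not values quoted
    # directly from the AI Index report.
    medqa_2023 = 90.2                  # GPT-4 Medprompt accuracy, %
    implied_2022 = medqa_2023 - 22.6   # "22.6 percentage point improvement"
    implied_2019 = medqa_2023 / 3      # "almost tripled since 2019"

    print(f"Implied 2022 top score: ~{implied_2022:.1f}%")  # ~67.6%
    print(f"Implied 2019 level:     ~{implied_2019:.1f}%")  # ~30%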

Additionally, AI is making inroads at the FDA. The agency authorized 139 AI-related medical devices in 2022, up 12.9% from the previous year, and the number of AI-related medical devices receiving FDA authorization has more than quadrupled since 2012. AI is increasingly being applied to practical medical problems.

AI Education and Talent "Brain Drain"

Although AI tools can make many jobs easier for the humans who hold them, people must still play a central role in the technology’s development and advancement. As such, the report detailed the human workforce behind the AI revolution.

To begin, the number of new American and Canadian computer science (CS) bachelor’s and PhD graduates continues to rise, while the number of new CS master’s graduates has stayed relatively flat. Data from 2011 showed roughly equal shares of newly graduated AI PhDs finding employment in academia (41.6%) and industry (40.9%). By 2022, however, a far higher percentage went into industry (70.7%) than into academia (20.0%). The share of AI PhDs heading for industry has increased by 5.3 percentage points in the last year alone, suggesting a “brain drain” of academic talent into industry.

Additionally, AI-related degree programs are on the rise globally. The number of English-language AI postsecondary degree programs has tripled since 2017, with steady growth each year over the past five years. Universities around the world are clearly seeing the advantages of offering more AI-focused degree programs.
