Advanced Computing in the Age of AI | Friday, March 29, 2024

Ethics Moves Front and Center in AI Debate 

The breakneck pace of machine intelligence development is prompting welcome assessments of the ethical implications of a technology that will have a profound effect on workers, consumers and nearly every other segment of society.

With that sobering reality in mind, a new risk assessment released by federal contractor Booz Allen Hamilton seeks to move beyond the often-heated rhetoric associated with AI to explore “subtle but substantial ethical problems” posed by machine intelligence. (Rather than artificial intelligence, the authors instead focus on the overarching concept of machine intelligence, defined as machines augmenting humans to accomplish a specific task.)

The assessment comes as AI researchers at Google protest the company’s involvement in a Pentagon research project.

As machine intelligence moves beyond automating rote tasks to become a tool for banks, doctors and consumers, the risk assessment offers a look-before-you-leap framework for identifying and mitigating unintended consequences. An “MI Risk Triage Framework” ranks the unintended impact of machine intelligence initiatives from mere annoyance to financial and psychological harm, culminating with high-risk initiatives threatening physical harm.

Source: Booz Allen Hamilton

Prime examples of the last category include several self-driving car crashes in recent weeks.

Those and other recent incidents involving machine learning deployments “did not arise from intentional malicious action on the part of the technologies’ developers or implementers,” the risk assessment notes. “Instead, they largely arose from failure to proactively assess and mitigate the social implications of the technology.”

The risk assessment also considers the scale of deployments based on the number of users affected and the overall societal impact of unintended consequences.
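The two dimensions described above, severity of unintended impact and scale of deployment, lend themselves to a simple triage calculation. The sketch below is a hypothetical illustration of that look-before-you-leap idea; the tier names, thresholds and scoring are assumptions for demonstration, not Booz Allen Hamilton's actual rubric.

```python
# Illustrative risk triage combining severity of unintended impact with
# deployment scale. All thresholds and labels here are hypothetical.
from enum import IntEnum

class Severity(IntEnum):
    ANNOYANCE = 1           # e.g., an irrelevant recommendation
    FINANCIAL_HARM = 2      # e.g., a mistaken loan denial
    PSYCHOLOGICAL_HARM = 3  # e.g., exposure to distressing content
    PHYSICAL_HARM = 4       # e.g., a self-driving car crash

def triage(severity: Severity, users_affected: int) -> str:
    """Rank an initiative by severity of harm scaled by users affected."""
    # Scale multiplier: widely deployed systems magnify societal impact.
    scale = 1 if users_affected < 1_000 else 2 if users_affected < 1_000_000 else 3
    score = int(severity) * scale
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(triage(Severity.PHYSICAL_HARM, 5_000_000))  # widely deployed, worst-case harm -> high
print(triage(Severity.ANNOYANCE, 100))            # small-scale annoyance -> low
```

A real triage would rest on qualitative judgment rather than a numeric score, but the structure shows why the report treats a small-scale annoyance and a widely deployed system capable of physical harm so differently.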

Much of the current debate about the promise and pitfalls of machine intelligence focuses on privacy and the technology's impact on the workforce. The Booz Allen (NYSE: BAH) risk assessment adds to the growing list of ethical concerns by incorporating fundamental values such as respect for human dignity and transparency about how the technology is being applied.

Another ethical concern is equity, and the need to find ways of reining in algorithmic bias as machine intelligence platforms are scaled. The study notes that the developer community frequently lacks diversity in terms of race, gender and socioeconomic status, “increasing the risk of equity-related oversights in systems development.”

Deep learning and other machine intelligence platforms are trained largely on data produced by humans. The biases inherent in these data sets can be baked into machine intelligence systems, “thereby perpetuating and even amplifying existing societal biases,” the study warns.

“It is especially important that senior executives understand how their organizations are using [machine intelligence] and ensure it is deployed respectfully, transparently and equitably, as they will be held liable for unforeseen consequences,” the report concludes.

Among the fundamental ethical questions swirling around AI is its use in warfare. This week, thousands of Google employees signed a letter addressed to CEO Sundar Pichai protesting the company’s involvement in a Pentagon machine learning initiative dubbed Project Maven.

Also known as the Algorithmic Warfare Cross-Functional Team, the DoD effort aims to develop algorithms for analyzing full-motion surveillance video, freeing analysts to focus on the cognitive analytical aspects of video interpretation. A proponent of the AI effort asserted that Project Maven is “not even close to pulling the trigger.”

Nevertheless, Google (NASDAQ: GOOGL) employees said company involvement in the project “will irreparably damage Google’s brand and its ability to compete for talent. Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust.”

The talent competition heated up again this week when Google’s AI chief, John Giannandrea, left to join rival Apple (NASDAQ: AAPL).

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).

EnterpriseAI