
Is Your AI Ready to Be Regulated? Lessons from GDPR 

It’s been six years since the European Union (EU) passed the General Data Protection Regulation (GDPR), a wide-ranging and complex regulation intended to strengthen and unify data protection for all individuals within the EU. Now, as analogous regulations around artificial intelligence (AI) are taking shape in multiple parts of the world, the business world’s experience with GDPR can inform how companies prepare, or fail to prepare, for the inevitable regulatory scrutiny of their AI.

Companies should face facts

Certain parts of GDPR caused no small amount of corporate panic, because the then-new regulation required companies to provide an accurate and understandable explanation of how analytics (particularly machine learning models) made decisions. The regulation empowered individuals to demand and receive explanations of automated decision-making, although few consumers have forcefully exercised their rights in this area.

Still, although GDPR has been with us for six years, the panic at its onset has not produced a single industry standard for machine learning explainability. Because we are still waiting for definitive standards for understanding and controlling analytics, the broader road to eventual AI regulation is also likely to be rough and uneven.
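
To make concrete what an ‘explanation’ of an automated decision can look like in the absence of a standard, here is a minimal sketch: per-feature contributions from a hypothetical linear scoring model, with made-up weights and feature names. It illustrates the kind of output a standard might one day require, not any existing requirement.

```python
# A minimal sketch of one way to explain a single automated decision:
# per-feature contributions from a linear scoring model. The model weights,
# feature names, and applicant values are hypothetical, not any standard.

WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "late_payments": -0.9}
BIAS = 0.2

def explain_decision(applicant):
    """Return each feature's contribution to the score, largest impact first."""
    contributions = [(name, weight * applicant[name]) for name, weight in WEIGHTS.items()]
    return sorted(contributions, key=lambda c: abs(c[1]), reverse=True)

applicant = {"income": 0.6, "debt_ratio": 0.4, "late_payments": 1.0}
contributions = explain_decision(applicant)
score = BIAS + sum(value for _, value in contributions)

print(f"score = {score:.2f}")
for feature, value in contributions:
    print(f"  {feature}: {value:+.2f}")
```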

The fact is, government regulation of how AI technology is developed and used is inevitable. One of the primary reasons is that AI self-regulation does not work.

Research conducted in 2021 shows there’s no consensus among executives about what a company’s responsibilities should be when it comes to AI. For example, only 22% of the companies that participated in the study have an internal ethics board.

Anecdotally, I’ve had multiple conversations with executives who feel that AI applications don’t need to be ethical; they just need to be classified as high or low risk. Others struggle because they lack the tools to determine what is ‘fair enough,’ and standards for where to draw the line on what constitutes bias.

There is a lot of striving, of wanting to be ethical, but not as much support for defining ‘ethical AI’ in specific, measurable, clear terms. If the collective corporate non-response to GDPR’s explainability component is indicative of how organizations will react to nascent AI regulations, they will struggle to understand which parts of a new regulation apply to them, how compliance will be measured, and where the thresholds for passing or failing lie. This chaotic mix of interpretations, measures and thresholds will spell confusion.

We need AI rules of the road

For any AI regulation to succeed, it needs to work like a highway system: there are speed limits, and violations are objectively measured and ticketed. Since companies and industries can’t seem to agree on how an analytic decision or an AI model should be explained, experts need to be brought in and empowered to make the hard decisions: define the specific tools and algorithms that are acceptable, and standardize the pass/fail metrics by which industries will be measured, instead of porous ‘self-reporting’ standards and mass confusion about how to meet them.

In this way, there can be objective measurement of how the AI was developed and what it does: did it do the job well and correctly, or poorly and incorrectly?

Certain industries have a head start, with highway systems already in place for the development of analytic models and analytic decisions, complete with violations and tickets overseen by federal regulators. The mortgage lending industry is a good example; credit decisions are made within guidelines designed to stamp out bias and discrimination. Lenders that don’t follow the rules of the road (by using biased data, decision criteria or models) face stiff penalties from regulators, and the eventual loss of consumer trust and business. But the highway system remains a work in progress; even though the mortgage lending industry is ahead of the pack, arguments still abound over how to measure bias and fairness, and where to set the pass/fail thresholds.
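
As one illustration of what a standardized pass/fail metric could look like, here is a minimal sketch of the disparate impact ratio checked against the ‘four-fifths’ rule of thumb; the decisions, group labels and 0.8 threshold are illustrative assumptions, not a regulatory standard.

```python
# A minimal sketch of one widely discussed bias measure: the disparate impact
# ratio, checked against the "four-fifths" rule of thumb. The decisions, group
# labels, and 0.8 threshold are illustrative assumptions, not a regulatory standard.

def disparate_impact_ratio(decisions, groups, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    def approval_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    return approval_rate(protected) / approval_rate(reference)

# Hypothetical loan decisions (1 = approved, 0 = denied) and applicant groups.
decisions = [1, 0, 1, 1, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups, protected="B", reference="A")
THRESHOLD = 0.8  # the four-fifths rule of thumb; real thresholds are still debated

print(f"disparate impact ratio = {ratio:.2f} -> {'PASS' if ratio >= THRESHOLD else 'FAIL'}")
```

Even this simple check shows why the arguments persist: the choice of reference group, the threshold value, and what counts as an approval are all judgment calls that a regulator, not each company, would have to pin down.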

Legal threats will drive AI ethics

In my view, a major driver will be the rise of AI advocacy groups, which wield much more power than the individual complaints anticipated by GDPR and related regulations. These groups are driving more awareness of the impacts AI has on consumers’ lives and, of course, of the potential for legal liability for biased AI. More than anything, risk exposure drives companies to address AI ethics and bias issues, particularly as innovative companies that use AI are, empirically speaking, the juiciest targets for class action suits.

Risk exposure (and, eventually, the prospect of running afoul of government regulation) is important for driving engagement and, if anything, should elevate AI to a board-level topic. If Chief Risk Officers (CROs) are not tracking AI risk, they should be. More pointedly, CROs should be championing comprehensive corporate AI governance frameworks that define Ethical AI standards and model development procedures in a way they can stand behind, and that stand up to regulatory scrutiny.

While GDPR may not have driven an explosion of individual consumer concern about the inner workings of analytic decisions, even experts view AI’s powerful momentum as decidedly ominous. A 2021 Pew Research report noted:

A large number of [expert] respondents argued that geopolitical and economic competition are the main drivers for AI developers, while moral concerns take a back seat. A share of these experts said creators of AI tools work in groups that have little or no incentive to design systems that address ethical concerns.

To avoid the backlash from lawsuits, consumer mistrust, advocacy groups and, eventually, more widespread government regulation, companies need to grow up and own the way they design AI systems, and confront and manage AI risk.

About the Author

Scott Zoldi is chief analytics officer at FICO, responsible for the analytic development of FICO's product and technology solutions. While at FICO, Scott has authored more than 110 analytic patents, with 71 granted and 46 pending. Scott is actively involved in the development of new analytic products and Big Data analytics applications, many of which leverage new streaming analytic innovations such as adaptive analytics, collaborative profiling and self-calibrating analytics. Scott is most recently focused on the application of streaming self-learning analytics for real-time detection of cyber security attacks. Scott serves on two boards of directors, Software San Diego and Cyber Center of Excellence. Scott received his Ph.D. in theoretical and computational physics from Duke University.
