Advanced Computing in the Age of AI | Friday, April 19, 2024

Here’s Why Enterprise AI Is Being Drafted to Fight Stimulus Fraud 

Without an enterprise AI approach, prosecutors who see fraud in the federal government’s Paycheck Protection Program admit there are too many scams to count, let alone stop. Organized crime is scheming to take a growing cut of the emergency spending in the CARES Act. The rules of stimulus programs are constantly changing, making it hard to know who should and shouldn’t obtain that financing or how they should spend it.

This sounds like a job for enterprise artificial intelligence, and banks are indeed turning to it for help. But what qualifies as AI in quelling stimulus fraud, and how exactly would it work, if it works at all?

Rules engines slipping under the waves

It is clear that common approaches, often billed as machine learning and sometimes as artificial intelligence, fail to address today’s stimulus fraud-fighting needs.

Bank anti-fraud officers have added new rules to their systems for flagging activity that looks suspicious, often based on dated government law enforcement data. They have introduced party- and account-level monitoring. They have tuned their systems as often as they can. But under the pressure of the massive volume of stimulus program checks, their alert backlogs are growing, their investigators are fatiguing, and their risk is escalating.

Buried under unmanageable volumes of false positives, risk officers are unable to identify false negatives. These are the worst: the existing bank customer, for instance, who has always stayed out of the spotlight but who, with “know your customer” rules loosened under the stimulus program, presses that advantage.

Bank officers are also striving to meet the cost-cutting measures imposed on them as their institutions try to keep compliance spending under control. They do the only thing they can and attempt to tune their thresholds once again, only to recognize that they can no longer tune their way out of trouble. K-Means clustering, the safe go-to, does not provide the accuracy or uplift bank officers need.
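To make the K-Means complaint concrete, here is an illustrative sketch (not drawn from any bank's system; all figures are invented) of a minimal one-dimensional K-Means run on skewed transaction totals. On data where a few commercial accounts dwarf the retail majority, the resulting clusters tend to be badly unbalanced, which limits how useful they are as fraud-detection segments:

```python
# Illustrative only: why plain K-Means can struggle on skewed
# transaction data. All names and numbers here are hypothetical.
import random

random.seed(42)

# Synthetic monthly totals: mostly small retail customers,
# plus a handful of large commercial accounts.
amounts = [random.gauss(500, 100) for _ in range(950)] + \
          [random.gauss(50_000, 10_000) for _ in range(50)]

def kmeans_1d(data, k=3, iters=20):
    """Minimal 1-D K-Means: returns final centroids and clusters."""
    centroids = random.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            clusters[nearest].append(x)
        # Keep the old centroid if a cluster ends up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans_1d(amounts)
print(sorted(len(c) for c in clusters))  # cluster sizes are lopsided
```

The point of the sketch is the printed size distribution: the retail mass gets split arbitrarily while the commercial tail forms its own cluster, so no cluster corresponds to a behaviorally coherent peer group.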

Starting with basics

Simply put, anti-fraud teams need alerts to be more accurate and false positives to be rare. Accurate alerts give investigators valuable context, so they can focus on what matters most: genuinely suspicious behavior.

An augmented anti-fraud process applies intelligence at key leverage points to produce significantly more accurate alerts. It is designed in three parts: system optimization, emerging-behavior detection and new-entity risk detection. This allows you to adopt just what you need, when you need it; that is, you take on only the pieces that improve the weakest parts of your process.

Known knowns, unknown unknowns, and the rest

Optimizing a system is best done by improving its effectiveness at discovering “known knowns.” The key is to optimize the existing system with more accurate segmentation of all parties and to improve the speed, accuracy and effectiveness of the periodic threshold-tuning process.
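One way to picture threshold tuning, sketched here with entirely hypothetical data and labels, is as a search over candidate cut-offs scored against investigator dispositions of past alerts. A higher-precision cut-off means fewer wasted investigations:

```python
# Illustrative only: threshold tuning as a simple search over
# candidate cut-offs. Amounts and labels below are invented.
alerts = [  # (transaction amount, confirmed suspicious by investigators?)
    (200, False), (9_500, True), (300, False), (7_000, False),
    (12_000, True), (450, False), (8_800, True), (6_500, False),
]

def precision_at(cutoff):
    """Share of flagged alerts that were genuinely suspicious."""
    flagged = [suspicious for amount, suspicious in alerts if amount >= cutoff]
    return sum(flagged) / len(flagged) if flagged else 0.0

for cutoff in (1_000, 7_500, 10_000):
    print(cutoff, round(precision_at(cutoff), 2))
```

Real tuning balances precision against the false negatives a higher cut-off introduces, which is exactly why, as the article argues, tuning alone eventually stops working.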

Emerging-behavior identification should focus on “unknown knowns” and on keeping your system relevant. Introduce dynamic, intelligent tuning and visibility into emerging behaviors, and retire the periodic tuning projects that are so costly, cumbersome and immediately outdated.

New-entity risk detection means discovering net-new “unknown unknown” risks and vulnerabilities previously missed or never considered. Identify and be alerted to new risks, not just at the loan, account or customer level but for any context, party or hierarchy, and not just for stopping fraud but for cyber, surveillance, conduct, trafficking, liquidity exposure, credit risk and beyond.

Segmenting for success

The false-positive problem in fraud detection is primarily a function of poor segmentation of the input data. Even sophisticated financial services institutions using machine learning for detecting fraud can suffer from low accuracy and high false negatives. This is because open source machine learning techniques analyze data in large groups and cannot get specific enough to correctly surface genuinely suspicious behavior.

A typical segmentation process produces uneven groups, and this means that thresholds must be set artificially low – resulting in a significant number of false positives. Smart segmentation is the crucial first step for a system to accurately detect suspicious patterns, without needlessly flagging expected ones. The process falls short when institutions only sort static account information using pre-determined rules.
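The threshold effect described above can be shown with a toy example (all amounts are hypothetical): a statistical cut-off computed over one coarse segment that mixes retail and commercial activity fits neither group, so a transaction that is wildly abnormal for a retail customer sails through, while a fine-grained retail segment catches it:

```python
# Hypothetical numbers illustrating why coarse segments break
# statistical thresholds. Not real customer data.
import statistics

retail = [100, 120, 90, 110, 105, 95, 115, 108]      # typical retail totals
commercial = [9_000, 11_000, 10_500, 9_800, 10_200]  # typical commercial totals

def threshold(values, sigmas=3):
    """Simple cut-off: mean plus `sigmas` population standard deviations."""
    return statistics.mean(values) + sigmas * statistics.pstdev(values)

suspicious_retail_txn = 5_000  # wildly abnormal for a retail customer

coarse = threshold(retail + commercial)  # one threshold for everyone
fine = threshold(retail)                 # threshold for the retail segment alone

print(suspicious_retail_txn > coarse)  # the mixed segment hides the anomaly
print(suspicious_retail_txn > fine)    # its own segment surfaces it
```

Lowering the coarse threshold far enough to catch the retail anomaly would instead flag routine commercial payments, which is the false-positive flood the article describes.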

A good enterprise AI approach should ingest the greatest volume and variety of data available - about customers, counterparties and transactions - and then apply objective machine learning to create the most refined and up-to-date segments possible. Topological data analysis is one of the best tools for this, given its ability to handle many variables, but it is not well known even within the artificial intelligence field.

The crucial point is that enterprise-grade anti-fraud AI needs to be able to assign and reassign parties to segments based on their actual behavior, revealed in their real transactions and true inter-relationships over time. An intelligent segmentation process should deliver far more granular and uniform groups, allowing higher thresholds and fewer false positives. These granular groups should also catch the false negatives that coarse segments miss.
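A minimal sketch of that reassignment idea (this is an assumed illustration, not the vendor's actual method; segment names and figures are invented): membership is recomputed each period from recent behavior, so a party whose activity pattern shifts is judged against the norms of its new peer group rather than a stale profile:

```python
# Hypothetical sketch of behavior-based re-segmentation.
# Segment names, boundaries and figures are all invented.
def assign_segment(avg_txn_amount):
    """Toy behavioral segments keyed to recent average transaction size."""
    if avg_txn_amount < 1_000:
        return "retail"
    if avg_txn_amount < 20_000:
        return "small-business"
    return "commercial"

# Rolling monthly averages for one hypothetical party; its behavior
# shifts sharply in March.
history = {"jan": 450, "feb": 480, "mar": 15_000}

segments = {month: assign_segment(avg) for month, avg in history.items()}
print(segments)
# The March shift moves the party into a new segment, so its activity
# is compared with small-business norms instead of a retail profile.
```

A production system would of course derive segments from many behavioral variables rather than one average, but the reassignment-over-time principle is the same.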

Paying dividends

The previously unanswerable questions that the data and proper enterprise AI can now answer will create new opportunities and growth areas, too. High-performance enterprise AI cuts the time it takes to produce insights, scales along with datasets, explores automatically and without bias, incorporates new data into older analyses and can actually reduce hardware costs.

Bank clients won’t necessarily appreciate these secondary machine learning benefits at first. They are measures that help managers detect and track patterns of fraud, not marketing tools. But they can provide winning insights and defensive alerts that will protect a company’s brand, public relations and image.

About the Author

Simon Moss is CEO of Symphony AyasdiAI, an enterprise artificial intelligence company serving financial services and other industries.

EnterpriseAI