Advanced Computing in the Age of AI | Thursday, April 25, 2024

AI: Fact versus Fiction 

AI adoption inside U.S. companies is soaring, with a recent PwC study finding that 86 percent of its 1,032 IT respondents believe AI will be a mainstream technology at their companies in 2021.

The problem is that, on closer inspection, some of what is sold as AI turns out to be nothing more than marketing hype.

AI is used nowadays as a generic term to describe a host of products, many of which do not replicate human intelligence, which is one of the hallmarks of true AI. Marketers today often brand any decision-making logic in software as AI, even if there's nothing intelligent about it at all.

For CIOs looking to integrate AI, what they need is a way to evaluate whether an AI product or service actually harnesses the power of AI, or whether it's a mirage.

To help cut through the hype, organizations considering the purchase, implementation and use of AI products need to look for the four telltale signs that indicate whether something truly uses AI or is simply a watered-down version of the technology.

1. If the algorithm doesn't exhibit judgment, then it's not really AI

An AI algorithm dynamically learns the critical factors that lead to a particular outcome. For example, in IT operations, traditional rules-based software might flag a warning whenever a server's CPU utilization exceeds 90% for a sustained period. With AI, the software learns over time that danger arrives only when CPU utilization exceeds 94% for more than 125 seconds on a company's servers: a pattern learned from the data, with a clear correlation. If that utilization level changes later, true AI would adjust the correlation without human intervention to reflect the new inputs. If the decision logic remains static over time, it is based on rules rather than on true AI; a true AI algorithm detects patterns that evolve and change.
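As a minimal sketch of this distinction, the contrast between a fixed rule and a threshold that adjusts as new evidence arrives might look like the following. All class names, rates, and numbers here are illustrative, not taken from any real monitoring product:

```python
def static_rule(cpu_pct: float) -> bool:
    """Rules-based check: a fixed threshold that never changes."""
    return cpu_pct > 90.0


class AdaptiveThreshold:
    """Learns the CPU level that actually precedes incidents,
    nudging its alerting boundary toward the observed evidence."""

    def __init__(self, initial: float = 90.0, rate: float = 0.2):
        self.threshold = initial
        self.rate = rate  # how quickly new evidence moves the estimate

    def observe(self, cpu_pct: float, caused_incident: bool) -> None:
        # Move the threshold toward the levels that actually caused
        # trouble, so the boundary tracks the environment over time.
        if caused_incident:
            self.threshold += self.rate * (cpu_pct - self.threshold)

    def alert(self, cpu_pct: float) -> bool:
        return cpu_pct > self.threshold


detector = AdaptiveThreshold()
for level in (95.0, 96.0, 93.0):   # incidents observed near 93-96%
    detector.observe(level, caused_incident=True)

print(static_rule(92.0))       # True: the fixed rule still fires at 92%
print(detector.alert(92.0))    # False: the learned boundary has drifted up
```

The point is not the arithmetic but the shape: the adaptive version revises its own decision boundary from data, while the rule never will.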

2. If the algorithm's answer stays constant, then it's not really AI

With true AI, decisions should change over time as the system learns. Smartphone users see this in the predictive text on their handsets. If a phone user regularly messages a colleague named Michelle, they would expect typing the letters "Mi" to bring up "Michelle" in the frequently used words list. But if Michelle leaves the company and the user starts working with a new colleague named Michael, they would expect "Mi" to quickly adjust so that "Michael" appears as the first option. That's because as the algorithm processes new data, its decision-making continually adapts and improves. If a vendor states that their AI product always returns the same answer, ask about the rules underpinning its decision-making. True AI is a dynamic tool that learns, so the model's decisions should continually improve. If they do not, it is not really AI.
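The predictive-text behavior described above can be sketched as a toy frequency model; real keyboards use far more sophisticated models, so this is only an illustration of suggestions shifting as usage shifts:

```python
from collections import Counter


class PredictiveText:
    """Toy frequency model: the top suggestion for a prefix
    changes as the user's typing habits change."""

    def __init__(self):
        self.counts = Counter()

    def type_word(self, word: str) -> None:
        self.counts[word.lower()] += 1

    def suggest(self, prefix: str):
        """Return the most frequently typed word with this prefix."""
        matches = [(n, w) for w, n in self.counts.items()
                   if w.startswith(prefix.lower())]
        return max(matches)[1] if matches else None


kb = PredictiveText()
for _ in range(5):
    kb.type_word("Michelle")
print(kb.suggest("mi"))   # michelle

for _ in range(8):        # a new colleague takes over the conversation
    kb.type_word("Michael")
print(kb.suggest("mi"))   # michael
```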

3. If the algorithm always gets the correct answer, then it's not really AI

The flip side of learning and improving is that, at least initially, AI must sometimes make the wrong decisions; otherwise, how can it improve? Consider an AI-powered fraud detection algorithm at a credit card company. As the algorithm learns, it should make better decisions and catch suspicious transactions more accurately over time. However, at the start, it will allow some transactions that are actually fraudulent. A vendor that claims their software always gives the correct answer can't be using true AI. AI must learn and evolve, and the answer should be specific to each organization. For example, if you deploy the same fraud detection AI product to Bank of America and to the Bank of Singapore, the results will be different. That’s because the AI learns from the different data sets, and there will be some transactions that the AI used by Bank of America will designate as suspicious, whereas the AI used by Bank of Singapore might decide they are authentic.
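The same-product, different-answers effect follows from training on different data. A minimal sketch, using a simple statistical outlier check in place of a real fraud model (the bank names and transaction histories are invented for illustration):

```python
from statistics import mean, stdev


class FraudDetector:
    """Flags transactions far above this institution's typical
    amounts, so the same algorithm behaves differently per bank."""

    def __init__(self, history: list, sigmas: float = 2.0):
        self.mu = mean(history)
        self.sigma = stdev(history)
        self.sigmas = sigmas

    def is_suspicious(self, amount: float) -> bool:
        # Suspicious if the amount sits well outside this bank's norm.
        return amount > self.mu + self.sigmas * self.sigma


bank_a = FraudDetector([20, 35, 40, 25, 30])            # small retail spend
bank_b = FraudDetector([900, 1200, 1500, 1100, 1300])   # large transfers

print(bank_a.is_suspicious(500))   # True: an outlier for bank_a's customers
print(bank_b.is_suspicious(500))   # False: routine for bank_b's customers
```

Identical code, different training data, opposite verdicts on the same $500 transaction; that divergence is what the article argues you should expect from a genuinely learning system.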

4. If the algorithm detects but doesn't respond, then it's not valuable

For AI to add value, it must respond as well as detect. This is the major weakness of many alleged AI solutions: they identify a pattern but fail to act on the actual data. AI-powered security software that detects an attack on your systems is useful when it notifies you of the incident, but you also want it to take action that prevents the attack without impacting valid users. If the AI does nothing with the information, is it actually helping your organization? AI's strength is its ability to solve problems in response to what it finds; without that, it's just another reporting tool, and it's not really AI.
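The detect-versus-respond gap can be shown in a few lines. This sketch assumes a hypothetical login handler and a simple failed-attempt threshold; the point is only that the finding triggers an action (blocking the source) rather than just a log entry:

```python
def handle_login(failures: int, source_ip: str, blocklist: set) -> str:
    """Detect a likely brute-force attempt and respond by blocking
    the source, rather than only raising an alert."""
    if failures < 5:
        return "ok"                  # normal traffic, no action needed
    blocklist.add(source_ip)         # the response: act on the finding
    return f"blocked {source_ip}"


blocked = set()
print(handle_login(2, "10.0.0.5", blocked))   # ok
print(handle_login(7, "10.0.0.9", blocked))   # blocked 10.0.0.9
```

A detection-only tool would stop at the `if` test and leave the remediation to a human; the response step is what turns the finding into value.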

At its heart, AI is about making faster and better decisions. An AI product or service should include learning, reasoning and perception. If the algorithm doesn't learn and evolve, then it is not really AI and is more likely making decisions based on rules.

AI pioneer Arthur Samuel stated back in 1959 that machine learning is the "field of study that gives computers the ability to learn without being explicitly programmed." That mantra still holds today.

The key to AI is that it learns as it is used. If the algorithm is not continuously learning and delivering actionable insights, it is not really harnessing the power of true AI.

There is a wide array of products claiming to use AI. But before you invest, be sure the product meets the tests described here. Otherwise, rather than helping your organization gain a competitive advantage by integrating AI, you run the risk of diverting budget to yet another technology product that overpromises and then fails to deliver.

And that would not be an intelligent approach.

About the Author

Antony Edwards is the COO at Eggplant. He studied computer engineering at the University of New South Wales, Australia, then started his career as a developer in Sydney before joining IBM Research in New York. After relocating to London, Antony joined mobile operating system builder Symbian, moving from system architecture to eventually becoming a VP and a member of the company's executive team. Before joining Eggplant, he held the position of CTO with a major U.S. online entertainment company.


EnterpriseAI