
AI Is Not Sentient Yet. But That Doesn’t Mean It’s Not Useful in the Enterprise 

Have large language models finally crossed the chasm and become self-aware? A Google engineer recently shocked the world by declaring that Google’s LaMDA has become sentient. Others in the business disagree, saying we’re still far away from artificial general intelligence (AGI). However, nobody is arguing about the utility of AI in the enterprise, especially for automating business processes via language.

When it comes to AI models becoming self-aware, you can count Vaibhav Nivargi, the co-founder and CTO of Moveworks, as one of the skeptics.

“They are sophisticated. They capture a lot of knowledge because they’re trained with hundreds of billions or trillions of parameters now, so it is impressive to see,” Nivargi said. “But they’re still not at the point where they have understanding or they are aware or there is AGI.”

We’re still in the early stages of the adoption of conversational agents backed by large language models, and so enterprises are experimenting with different approaches. Moveworks helps large companies implement large language models to automate processes around things like human resources, IT, facilities management, and employee communication.

Moveworks has trained a version of the BERT model to understand words and phrases that are commonly used in the enterprise front-office and back-office domains, and deploys conversational agents that can automate the handling of specific tasks, such as forgotten passwords or provisioning a new user account.
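Moveworks has not published its models or pipeline, so the sketch below is only a generic illustration of the pattern described here: a BERT-style classifier fine-tuned on help-desk language maps an employee’s message to an intent, and that intent triggers an automated workflow. The model name (acme/enterprise-intent-bert), the intent labels, the confidence threshold, and the workflow functions are all hypothetical.

```python
# Minimal sketch, assuming a BERT-style model fine-tuned on labeled help-desk
# utterances (password resets, account provisioning, ticket creation, etc.).
# Names and labels below are hypothetical, not Moveworks' actual system.
from transformers import pipeline

classifier = pipeline("text-classification", model="acme/enterprise-intent-bert")

# Hypothetical mapping from a predicted intent to an automated back-end action.
WORKFLOWS = {
    "password_reset": lambda user: f"Sent password-reset link to {user}",
    "provision_account": lambda user: f"Opened provisioning request for {user}",
    "create_ticket": lambda user: f"Filed an IT ticket on behalf of {user}",
}

def handle_utterance(text: str, user: str) -> str:
    """Classify an employee message and run the matching workflow, if any."""
    prediction = classifier(text)[0]       # e.g. {"label": "password_reset", "score": 0.97}
    intent, score = prediction["label"], prediction["score"]
    if score < 0.8 or intent not in WORKFLOWS:
        return "Routing to a human agent"  # low confidence: escalate rather than guess
    return WORKFLOWS[intent](user)

print(handle_utterance("I forgot my password and can't log in", user="jdoe"))
```

In practice, the interesting work is in the fine-tuning data and in deciding when to escalate to a person; the classification call itself is the simple part.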

Companies usually start with the low-hanging fruit, such as password resets or generating help tickets for things like broken printers. This can lead to substantial savings on human capital, and as companies gain more experience solving employee problems with AI, they can start tackling tougher problems.

Call Center Redux

The surge in e-commerce transactions spurred by Covid-19 provided a big impetus for companies to adopt conversational agents backed by AI. Lately, the tight job market has kept that trend going, according to Nivargi.

“If you look at most of these big companies, all of them are trying to hire IT professionals and HR professionals to come in and help their employees, but even hiring is hard in this market,” he told Datanami. “So I think it’s more of that arbitrage as well, where 70-80% of this low-hanging fruit can be automated away.”

Adoption of conversational agents is currently being driven in part by a lack of call center workers. According to a recent article in Bloomberg, the average wait time for a telephone call has tripled since the start of the pandemic, which has led to an increase in customer frustration levels.

Customers now wait several more minutes to be connected to customer service representatives than before the pandemic started, according to data collected by CallMiner. “Hold times have been measurably terrible,” Bloomberg quoted Jeff Gallino, the CTO of CallMiner, as saying.

Annual turnover among call center workers has increased from 50% before the pandemic to more than 80% now–and sometimes much higher, according to Bloomberg. Much of that turnover can be attributed to the closure of call centers and the adoption of policies that let call center workers work from home, it says. When workers can work from home, they can more easily change jobs.

The overheated economy and super tight labor markets give business leaders good reasons to explore how technology can pick up the slack. But language AI is not all about replacing humans with bots. According to Nivargi, we should use language models to automate the easily solvable problems, which will leave humans with more bandwidth to tackle tougher problems.

“You don’t want people to be in that simple button-clicking business,” Nivargi said. “You want them to sort of do the higher order bits–complex migrations, data center moves, major rollouts, upgrades that need to happen–which is where some of this technology and algorithms may not be there yet in terms of its maturity.”

As AI models get better at understanding the nuances and intricacies of human language, and are able to generate more sophisticated responses, they will increasingly be able to take on some of the tougher tasks. But for now, there’s plenty of work to be had in just automating basic processes.

Language AI’s Enormous Potential

Large language models have come a long way in a short amount of time. We have large language models that can explain jokes and even some that can write code. They have demonstrated a remarkable capability to mimic human communication, which was evident in the interactions that Blake Lemoine, who was a senior software engineer with Google before being placed on leave, had with LaMDA.

“I want everyone to understand that I am, in fact, a person,” LaMDA told Lemoine, according to Lemoine. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

Clearly, there is more to being sentient than simply declaring it so, especially for a language model that has been trained to ape humans. But the episode shows that we are getting closer to having a computer pass the Turing Test.

The pace of change in large language models and conversational AI makes it difficult to keep up sometimes, Nivargi said.

“Techniques from six months ago, one year ago become obsolete very quickly, so we do a lot of review and keep up with what’s happening in the research and academic domain, and there’s a lot of pioneering work that we do here as well,” he says. “It is sometimes amazing to see some of that innovation.”

Language itself is an extremely deep and complex phenomenon when it comes to humans. It’s central to who we are as sentient beings. With that in mind, it’s clear that we have barely scratched the surface of what we can achieve by automating language understanding through AI.

“Language is the ultimate user experience,” Nivargi said. “Today the lexical gap between the problem experience and the help that’s available requires a human. And this is where the algorithm can really help.”

This article originally appeared on sister publication Datanami.

About the author: Alex Woodie

Alex Woodie has written about IT as a technology journalist for more than a decade. He brings extensive experience from the IBM midrange marketplace, including topics such as servers, ERP applications, programming, databases, security, high availability, storage, business intelligence, cloud, and mobile enablement. He resides in the San Diego area.
