Usable AI: Putting the Right Amount of Human in the Loop
Most companies embarked on their AI journey thinking they would build large AI systems that would take over entire workstreams end-to-end. After several years of proofs of concept and experimentation, it is clear that AI is a fantastic prediction machine, arguably the best prediction machine we will ever have, but that judgment still requires human engagement. With that realization, enterprises are defining new ways of deploying AI.
Hybrid is the name of the game – and how it is played all comes down to the right mix of accuracy and risk.
Let’s take transactional finance processes, for example. AI is deployed very successfully in many large corporations to extract information from invoices, compare and match purchase orders against invoices, and recommend the payment that needs to be made. Transactional finance policy can set a threshold below which the payment is made automatically; above the threshold, the item defaults to a manual queue where a human applies judgment to the dollar amount. This hybrid approach lets finance departments get to the right end point and improve the AI solution through reinforcement loops. In some cases, where accuracy is very high and the risk criteria are low, AI can automate the process completely, and humans are involved only in improving the AI. In other instances, where the risks are higher – such as credit card fraud or cybersecurity – the human and AI interaction is very tight, and human analysts are involved in improving the AI algorithm at every step. And where accuracy is lower still, the risk even higher, or empathy most important, such as in health-related fields, AI is used simply as a prediction engine to augment the human decision process.
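The threshold-and-confidence routing described above can be sketched as a small policy function. This is an illustrative sketch only: the class, the field names, and the specific dollar and confidence limits are assumptions, not any particular company's policy.

```python
from dataclasses import dataclass


@dataclass
class PaymentRecommendation:
    """An AI-produced invoice/PO match with a model confidence score."""
    invoice_id: str
    amount: float       # recommended payment amount in dollars
    confidence: float   # model's match confidence, 0.0 to 1.0


# Hypothetical policy values; each finance department sets its own.
AUTO_PAY_LIMIT = 10_000.00   # dollar threshold for straight-through processing
MIN_CONFIDENCE = 0.95        # below this, always escalate to a human


def route_payment(rec: PaymentRecommendation) -> str:
    """Return 'auto_pay' for straight-through processing, or
    'manual_review' to place the item in the human judgment queue."""
    if rec.amount <= AUTO_PAY_LIMIT and rec.confidence >= MIN_CONFIDENCE:
        return "auto_pay"
    return "manual_review"


# A small, confidently matched payment goes straight through...
print(route_payment(PaymentRecommendation("INV-001", 420.00, 0.99)))     # auto_pay
# ...while a large one defaults to the manual queue for human judgment.
print(route_payment(PaymentRecommendation("INV-002", 75_000.00, 0.99)))  # manual_review
```

The human decisions accumulating in the manual queue are exactly the labeled examples a reinforcement loop can use to improve the matching model over time.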
Human in the loop
Perhaps the largest challenge in applying AI in the enterprise is that no matter how great the computer vision, text extraction or pattern recognition algorithm is, the recommendation needs to be contextualized or nuanced based on how it will be used. For example, in pharmacovigilance, AI can easily be used to extract adverse events from doctors’ notes, phone recordings and social media posts; it can spot a pattern in a large volume of health trend data; and it can automatically classify and report these adverse events to the regulators. But making decisions based on this data is risky and could carry a significant public health impact. So, it is not sufficient to run this process with an AI engine through a reinforcement loop that automatically promotes the better model. Since the entire process needs to be regulatory compliant, enterprises should run two instances: one serving the current model, and a second learning from the data to keep enhancing the model. Only when the time is right should you promote the new model.
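The two-instance pattern described here, a production model serving decisions while a candidate is evaluated in the background and promoted only by a deliberate, human-gated step, can be sketched as below. The class, method names, and stand-in models are illustrative assumptions, not a reference to any specific pharmacovigilance system.

```python
class ModelRegistry:
    """Minimal champion/challenger registry: the champion serves
    production; the challenger is evaluated but never auto-promoted."""

    def __init__(self, champion):
        self.champion = champion    # current validated production model
        self.challenger = None      # candidate retrained on newer data

    def predict(self, case):
        # All production decisions come from the champion instance.
        return self.champion(case)

    def shadow_score(self, case):
        # The challenger sees the same data, but its output is only
        # logged for comparison, never acted on.
        return self.challenger(case) if self.challenger else None

    def promote(self, validation_passed: bool):
        # Promotion is an explicit, human-gated step (e.g. after
        # regulatory validation), not an automatic reinforcement loop.
        if validation_passed and self.challenger is not None:
            self.champion, self.challenger = self.challenger, None


# Toy usage with stand-in models (real ones would classify adverse events).
registry = ModelRegistry(champion=lambda case: "report")
registry.challenger = lambda case: "report_with_priority"
print(registry.predict({"note": "patient reported dizziness"}))   # report
registry.promote(validation_passed=True)                          # human sign-off
print(registry.predict({"note": "patient reported dizziness"}))   # report_with_priority
```

The design choice that matters is the `promote` gate: the improving model never replaces the compliant one as a side effect of training, only through an auditable decision.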
Similarly, context and nuance are key when applying AI chatbots to internal HR processes. For example, we rolled out an AI solution for employee engagement. Pre-pandemic, we had layers of management trained by HR to get the pulse of their employee populations. During the pandemic, it was difficult to keep this process in place across a large population of 100,000 employees mostly working from home. So, we put in place an AI chatbot that asks questions in a non-judgmental and standardized way; the AI can tell us where the hotspots are and where the employees are doing well. The real value of the data lies in understanding why, in some countries, employees reach a low point six months after joining, whereas in other countries the issue does not exist. The AI cannot figure this out on its own. HR teams need to apply human judgment and use their contextual knowledge of onboarding policies in each location. That is where operating model and nuance are very important.
Ultimately, humans have to make the most of the information provided by AI and make a decision, often in a split second. For instance, in our partnership with Envision Racing, we refined an AI engine to sift through all the radio communications received during the race, remove the extra, irrelevant noise from multiple radio channels, and feed the driver only the relevant information for the race at that moment in time. This is incredibly helpful for the driver, who can focus all their attention on the racetrack, but ultimately, the driver still has the critical decision to make. It’s the partnership between AI and the human that makes the winning team.
So as the journey of AI continues, the acronym should really be used to mean augmented intelligence, not artificial intelligence. The issue for a CIO is not whether AI is usable; the answer to that question is an obvious yes. The issue is how to set up an operating and organizational model that leverages the power of AI in the right way for each individual business. This includes how to operationalize AI, which people processes are involved, and which governance methodologies to implement in each workflow. Indeed, digital is easy; transformation is hard. It is the people and processes that make or break digital transformation. And training the humans is as important as training the AI.
Sanjay Srivastava is Chief Digital Officer of Genpact. In this role, Sanjay leads Genpact’s growing digital and technology businesses. He oversees the company’s offerings in artificial intelligence, analytics, automation and digital technology services. He also oversees the Genpact Cora platform, an artificial-intelligence-enabled platform that builds upon customer experience at the core and delivers industry-leading governance, integration, and orchestration capabilities across digital transformations.