Maybe GenAI’s Impact Will Be Even Bigger Than We Thought?
The level of hype around generative AI is off the charts, as we have covered here in Enterprise AI over the past year. The hype is so thick at times, you could cut it with a knife. And yet, there is still the potential that people could be underestimating the impact that GenAI will have on business. At least that’s what the heads of two GenAI software companies are saying.
As the President and Co-founder of Moveworks, Varun Singh has a bird’s eye view of how large language models (LLMs) are impacting the enterprise. The company develops a platform that allows customers to leverage GenAI tech to build chatbots and other types of applications. The company counts more than 100 Fortune 500 companies as customers.
While the GenAI field is moving fast, Singh doesn’t think people have bitten off more than they can chew. “I haven’t seen people trying to do too much, in terms of what’s expected of them,” Singh says. “So far, what we’re seeing is… people are still coming to terms with how powerful these models are.”
Moveworks uses LLMs like GPT-4 to create chatbots, such as an HR chatbot that answers questions about company benefits, or an IT service desk chatbot that can answer questions about IT problems. More recently, the company has been moving up the GenAI ladder by helping customers create GenAI co-pilots that can handle more advanced tasks.
How well these GenAI co-pilots are working has been a real eye-opener for Singh, who anticipates a lot of progress in this area in a short amount of time.
“I think right now people are still thinking about LLMs as working as agents, but within application boundaries,” he tells Datanami in a recent interview. “The next level of use cases that are emerging, that we have been doing for a while now, especially with our next generation Moveworks Copilot, is acting as agents across application boundaries, where you don’t have to even mention the agent experience.”
One Moveworks Copilot application was able to handle the responsibilities of 36 different human agents, Singh says. Provided with the correct plug-ins to enterprise applications or data sources (Moveworks has more than 100 of them), the co-pilot can access the application, observe how human agents interact with it, and then carry out those tasks on its own.
“It’s completely insane in terms of its ability to discern and do actions across [a] range of different applications and auto-selecting the right plugins,” Singh says. “It’s working. And frankly, I don’t think that’s too much at this stage, in terms of how far you can push this technology.”
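Selecting the right plugin for a request can be sketched as a routing step in front of the action itself. The sketch below is purely illustrative, assuming a keyword-overlap heuristic and made-up plugin names; it is not Moveworks' actual method, which would use an LLM rather than keyword matching.

```python
# Toy plugin router: score each registered plugin by keyword overlap with
# the user's request, then dispatch to the best match. Plugin names and
# actions here are hypothetical stand-ins.

PLUGINS = {
    "jira": {"keywords": {"ticket", "issue", "sprint"},
             "action": lambda q: f"Filed Jira issue for: {q}"},
    "okta": {"keywords": {"password", "login", "mfa"},
             "action": lambda q: f"Reset credentials for: {q}"},
    "workday": {"keywords": {"pto", "benefits", "payroll"},
                "action": lambda q: f"Opened Workday case for: {q}"},
}

def route_request(query: str) -> str:
    """Pick the plugin whose keywords best overlap the query, then act."""
    words = set(query.lower().split())
    name, plugin = max(PLUGINS.items(),
                       key=lambda kv: len(kv[1]["keywords"] & words))
    if not plugin["keywords"] & words:
        return "No matching plugin; escalating to a human agent."
    return plugin["action"](query)

print(route_request("I forgot my password and mfa code"))
```

In a production copilot, the scoring step would itself be an LLM call that reads each plugin's description, but the control flow, which involves scoring, selecting, and acting, is the same shape.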
Moveworks gets down into the weeds with GenAI so its customers don’t have to. Its engineers poke and prod the various LLMs available on the enterprise market and from open source repositories to see where they’ll be a good fit for its customers. “We use GPT-4, but we also develop our own models,” Singh says. “We’re experimenting with Llama 2. We’re fine-tuning T5 and other open source models.”
GPT-4, for example, demonstrates tremendous capability in language understanding and generation, but it can increase latency and raises questions around accuracy, so Moveworks uses its own models in some situations, Singh says. Each GenAI deployment typically involves multiple models, which Moveworks coordinates behind the scenes.
“The most important thing for customers is time to value, and the cost of getting to that value,” Singh says. “They don’t care if it was GPT-3 or 4, as long as the employee experience and the results [are there]. And the results they’re looking for is complete automation of the service desk.”
The potential shown by GenAI is vast, but we’re barely scratching the surface of what it’s capable of, Singh says.
“These models are very powerful, but we’re not thinking deeply enough about the utility of these models,” he says. “So the crisis is a little bit on the creativity front.”
Are We Underselling GenAI?
Arvind Jain, the CEO and founder of Glean, has a similar story to tell.
Jain founded Glean in 2019 to create custom knowledge bases that enterprises could search to answer questions. The former Google engineer started working with early language models, like Google’s BERT, to handle the semantic matching of search terms to enterprise lingo. As LLMs got bigger, the capability of the chatbots got even better.
“We feel that GenAI’s potential is even larger,” Jain says. “There’s big hype and there’s been some disappointments. But I think right now, given how people feel, I think the impact of GenAI is actually larger than what most people think in the long run.”
Jain explains that the reason for his GenAI optimism is how much better the technology has gotten in just the past five years. As the technology improves, it lowers the barrier to entry for those who can partake of GenAI, while simultaneously raising the quality of what can be built.
“Five years back, it was only companies like us who could actually use these models,” Jain says. “You had to actually have engineering teams. The models were not as end-user ready. They were sort of clunky technologies, difficult to use, that don’t work that well. So then you need engineers to do a lot of work to tune those models and make it work for your use cases.
“But that changed,” he continues. “Now large language models have come to a place where it’s gotten democratized in some sense. Now everybody in the company can actually solve business problems using these models.”
If you want to build your own GenAI chatbot from scratch, it still takes engineering talent, Jain says, although anybody with the skills of a data scientist should be able to put it together. And if you want to build your own LLM, that piece of tech is really off the table for the vast majority of companies, due to the immense technical skill required, in addition to the huge mounds of training data and GPUs needed to train one.
But now that very powerful LLMs are readily available, engineering outfits like Glean can use them to build shrink-wrapped GenAI applications that are ready for business on day one. The core Glean offering is basically “like Google and ChatGPT inside your company,” Jain says. The company, which has 200 paying customers, also offers a low-code app builder that allows non-technical personnel to build their own GenAI apps.
“Companies should think of AI as a technology that they can use, that they can buy, that they can incorporate into their business processes, into their products without having to worry about ‘Hey, do I need to build talent to start building models,’” Jain says. “Very few companies need to actually build and train models.”
For every OpenAI, Google, or Meta that builds their own LLM from scratch, there will be many more companies like Glean that hire engineers and use the LLMs to build AI products that enterprises will use, Jain says. However, a handful of large enterprises may decide that they need to build their own GenAI products. Those enterprises will need engineering talent.
“Depending on the context, it’s going to require you to have [an] engineering team that’s going to be able to effectively use these large language model technologies and some RAG-based platform like Glean,” he says. “You would need some engineering to actually incorporate GenAI technologies into your business processes and products.
“Then there are also going to be many situations where you can just go buy a product,” he says. “And for that, you can use the HR team. You don’t need to build an engineering team. You can just buy a product like Glean or like many other products like that and just deploy that and get the value of AI.”
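The RAG (retrieval-augmented generation) pattern Jain refers to can be summarized in a few lines: retrieve the enterprise documents most relevant to a question, then ground the LLM prompt in them. The corpus and bag-of-words retriever below are toy stand-ins; a platform like Glean would use learned embeddings and a real search index.

```python
# Minimal RAG sketch with a hypothetical three-document corpus.

CORPUS = {
    "pto-policy": "Employees accrue 1.5 days of PTO per month, capped at 30 days.",
    "vpn-setup": "Install the VPN client, then sign in with your SSO credentials.",
    "expense-policy": "Expenses over $500 require manager approval before filing.",
}

def retrieve(question: str, k: int = 1) -> list:
    """Rank documents by word overlap with the question (toy retriever)."""
    q = set(question.lower().split())
    ranked = sorted(CORPUS.values(),
                    key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Ground the LLM prompt in the retrieved context before answering."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many days of PTO do employees accrue?"))
```

The payoff of this pattern is that the model answers from the company's own documents rather than from its training data, which is what makes a general-purpose LLM useful behind a corporate firewall.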
The future is wide open for GenAI, Jain says, particularly for companies that will leverage the technology to build compelling new products. We’re just at the beginning of that transformation, he says, and the early returns on GenAI investment are already very good.
“I honestly feel like the technology is continuing to surprise people. It’s moving fast. And we’re getting real value from it,” Jain says. “The applications go well beyond [the] chatbot use case. This technology is quite broad.”
This article first appeared on Datanami.