AI Virtualization and Orchestration Startup Run:AI Captures $30M in Series B Funding
Upstart AI vendor Run:AI dove into AI by creating Kubernetes-based software that helps customers get more out of their existing AI infrastructure investments. Run:AI’s software is an orchestration and virtualization layer that pools compute resources so they can be allocated on demand.
The promise of Run:AI’s technology has resulted in a new $30 million Series B investment round from several investors, including Insight Partners and previous investors TLV Partners and S-Capital.
The idea for Run:AI was simple: AI data scientists typically are allocated a fixed number of GPUs for their work, which can leave them short of critical processing power when they need it most – even as plentiful processing power sits idle elsewhere in a customer’s data center, said Fara Hain, the company’s vice president of marketing. Orchestration and virtualization solve that dilemma by bringing all of those resources together and making them available through a software layer, said Hain.
“The virtualization piece pulls it all together and pools it so it can be allocated as needed,” she said. “And it can be assigned priority.”
Run:AI is designed to be vendor-agnostic, but so far it supports only GPUs from Nvidia, the market leader. Support for other vendors’ GPUs is expected in the future, said Hain.
The company’s deep learning virtualization platform has been available to customers since early 2020 and has been adopted by many large enterprise users in the financial, automotive and manufacturing industries, according to CEO Omri Geller.
Money to Grow
The latest $30 million investment round, which follows an earlier $13 million round, will be used to significantly grow the company’s research and development team, its sales team and its marketing efforts, said Geller. “Most of the money will go to building the foundation and executing on the go-to-market strategy.”
Geller called the company’s technology a resource management layer that helps maximize the utilization of AI hardware for users. “And with that we help our customers to bring solutions faster to market since they have better access to compute power,” he said.
The platform aims to relieve the bottlenecks that many companies hit as they try to expand their AI research and development, according to the company. The enormous AI clusters being deployed on-premises, in public cloud environments and at the edge often can’t be fully utilized due to segmentation and compute limitations built into the systems. That’s where Run:AI’s software can deliver a significant benefit: its software layer is tailored to the needs of AI workloads running on GPUs and similar chipsets.
The company claims that its Kubernetes-based container platform is the first to bring OS-level virtualization software to workloads running on GPUs by automatically assigning the necessary amount of compute power – from fractions of GPUs, to multiple GPUs, to multiple nodes of GPUs – so that researchers and data scientists can dynamically acquire as much compute power as they need, when they need it.
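For context, stock Kubernetes exposes GPUs as an integer-only extended resource, which is exactly the granularity limitation a fractional-allocation layer like Run:AI’s addresses. The sketch below is a generic Kubernetes pod spec – not Run:AI’s actual API – illustrating that standard requests can only ask for whole GPUs:

```yaml
# Generic Kubernetes pod spec (not Run:AI's API). The standard
# nvidia.com/gpu extended resource accepts only whole-GPU integers,
# so fractional sharing requires an additional software layer.
apiVersion: v1
kind: Pod
metadata:
  name: training-job          # hypothetical example name
spec:
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:23.10-py3   # example image
    resources:
      limits:
        nvidia.com/gpu: 2     # whole GPUs only; "0.5" would be rejected
```

A value like `nvidia.com/gpu: 0.5` is invalid in vanilla Kubernetes, which is why Run:AI claims fractions-of-a-GPU allocation as a differentiator.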
Analysts See Promise in Run:AI’s Approach
Karl Freund, a senior analyst for machine learning, HPC and AI with Moor Insights & Strategy, told EnterpriseAI that Run:AI’s technology is intriguing for enterprise users.
“This is a great example of an emerging trend we are beginning to see: AI developers need tools to help manage and automate the workflow and the data,” said Freund. “These tools will help mature the development process and protect and leverage the assets companies are building.”
Another analyst, Peter Rutten, research director for infrastructure systems, platforms and technologies with IDC, agreed.
“I think Run:AI will have legs going forward,” said Rutten. “They’re essentially in the early stages of what VMware did for CPUs. Virtualizing GPUs and increasing their utilization will really help businesses reduce cost and gain productivity.”
Even Nvidia foresaw this approach by making its latest A100 GPUs partitionable, he added, “but that’s a hardware approach, and a virtualization layer is more effective as software. Also, as software it becomes practical for other hardware, like FPGAs, or other types of AI processors.”