
New Relic, Mona Labs Partner to Provide Deep Monitoring for Production AI Workloads 

New Relic’s cloud-based full-stack observability platform is being combined with AI monitoring from Mona Labs to help enterprises better monitor, analyze and troubleshoot their growing production AI workloads.

The new partnership, which was announced Oct. 25, means that enterprises will be able to explore the production machine learning data generated by Mona Labs’ monitoring and act on it within the New Relic One platform.

As more enterprises increase their use of AI in production, they will need these kinds of observability and monitoring services so they can ensure that their systems are working properly, according to the companies.

Enterprises that already use New Relic will be able to view dashboards and insights from Mona Labs that quickly surface data integrity and model performance anomalies in their production AI systems, significantly reducing business risk, the partners said. The partnership is aimed at bridging the gap between data scientists and production teams so they can monitor and refine their machine learning operations, including data corrections, model retraining and other mitigations.

By integrating Mona services with New Relic’s platform, data can be analyzed across the entire system, enabling site reliability engineering (SRE) and DevOps teams to better understand the performance of their data, according to the vendors.

The combined services automatically generate dashboards containing Mona insights about the production AI systems being monitored, while also allowing users to route custom Mona alerts into New Relic’s Alerts and Applied Intelligence centralized notification system. That system detects anomalies and reduces alert noise by correlating related alerts and incidents for users, the companies said.
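The article does not document the integration’s wire format, but the general pattern of pushing third-party insights into New Relic is well established. As a rough illustration only, the sketch below forwards a hypothetical Mona-style anomaly insight to New Relic’s public Event API as a custom event; every field in the payload is an assumption for illustration, not Mona’s actual schema.

```python
import json
import os

import requests  # third-party HTTP client (pip install requests)

# New Relic's Event API accepts arbitrary custom events as a JSON array.
# Account ID and insert key are read from the environment.
ACCOUNT_ID = os.environ["NEW_RELIC_ACCOUNT_ID"]
API_KEY = os.environ["NEW_RELIC_INSERT_KEY"]
EVENTS_URL = f"https://insights-collector.newrelic.com/v1/accounts/{ACCOUNT_ID}/events"


def forward_insight(insight: dict) -> None:
    """Forward one anomaly insight to New Relic as a custom 'MonaInsight' event.

    The insight fields (segment, metric, severity, description) are
    hypothetical; the article does not document Mona's payload schema.
    """
    event = {
        "eventType": "MonaInsight",  # custom event type, queryable via NRQL
        "segment": insight.get("segment"),
        "metric": insight.get("metric"),
        "severity": insight.get("severity"),
        "description": insight.get("description"),
    }
    resp = requests.post(
        EVENTS_URL,
        headers={"Api-Key": API_KEY, "Content-Type": "application/json"},
        data=json.dumps([event]),  # the API expects a JSON array of events
        timeout=10,
    )
    resp.raise_for_status()
```

From there, a NRQL alert condition along the lines of `SELECT count(*) FROM MonaInsight WHERE severity = 'critical'` could feed the Alerts and Applied Intelligence workflow described above.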

“Ultimately, business leaders want to know if their AI investments are successful as measured by the business KPIs (key performance indicators), and the data teams need to correlate efficiently and accurately between changes in the business KPIs and changes in the data and models within the AI environment,” Yotam Oren, the CEO and co-founder of Mona Labs, told EnterpriseAI. “That is why AI monitoring is a foundational need for companies scaling their AI investments.”

Yotam Oren of Mona Labs

The message his company continues to hear from enterprises is that they need help monitoring their complex and hard-to-manage AI systems, he said.

“Most teams we met already had plans to introduce monitoring for their AI systems, and many already had in-house projects trying to build it themselves,” said Oren. “These were often well staffed, strong engineering teams, and yet an in-house monitoring project required unique expertise that was outside of their core. We learned that many teams were happy to do away with their internal projects due to the extensive insights within our platform that we were able to provide customers.”

Mona Labs executives talked with more than 100 businesses in late 2018, after the company was founded, Oren said. It became apparent that as AI and machine learning shifted from the research lab to production and began driving businesses forward, data and business leaders needed better visibility into how their systems were performing and where there were issues that had to be addressed.

That is where Mona Labs’ analytical engine fits in, giving enterprises anomaly detection that can be tailored to their individual needs, he said.

“Mona can automatically detect anomalous data or model behavior segments,” said Oren. “The analytical engine can serve any AI use case on any stack. This is enabled by a unique configuration layer that enables AI teams to define and adapt their monitoring schema and control the powerful analytical engine. Another aspect of our flexibility is being able to easily integrate with other adjacent platforms, and established leaders such as New Relic recognized that. This collaboration with New Relic is one demonstration of how Mona can naturally extend customers’ existing observability deployments.”
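Mona’s actual configuration language is not shown in the article, so the following is a purely hypothetical sketch of what the declarative “monitoring schema” Oren describes might look like: the team names the fields its AI system exports, the dimensions to segment by, and the checks the analytical engine should run.

```python
# Hypothetical monitoring schema for illustration only; the field and
# check names are invented and do not reflect Mona's real configuration.
MONITORING_SCHEMA = {
    "context_class": "loan_approval_model",  # the AI use case being monitored
    "fields": {
        "credit_score":  {"type": "numeric"},
        "region":        {"type": "categorical"},
        "model_version": {"type": "categorical"},
        "prediction":    {"type": "numeric"},
        "actual":        {"type": "numeric", "arrival": "delayed"},  # ground truth lands later
    },
    # Slices the engine scans when hunting for anomalous segments.
    "segment_by": ["region", "model_version"],
    "checks": [
        {"type": "missing_rate", "field": "credit_score", "max": 0.02},
        {"type": "drift", "field": "prediction", "baseline": "trailing_30_days"},
        {"type": "metric_outlier", "metric": "auc", "inputs": ["prediction", "actual"]},
    ],
}
```

The appeal of such a layer is that the engine itself stays generic: serving a new use case or stack means changing the schema, not the engine.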

Early in the development of Mona, the company’s engineers knew that data scientists within each business have unique requirements for their AI systems and that a one-size-fits-all approach would not work, said Oren. “So, what we set out to do was to build a configurable platform,” he added. “We wanted to give teams an engineering arm that could do the heavy lifting for them.”

Mona is coming out of beta this year, according to Oren. Customers include Cape Analytics, Glovo, Fiverr, Gong and K Health across use cases including fraud detection, data-driven healthcare, e-commerce marketing optimization and revenue intelligence.

Guy Fighel, the general manager of applied intelligence and group vice president of product engineering at New Relic, said in a statement that by partnering with Mona, the two companies were able to deliver a unified data observability platform that gives data science and DevOps teams unprecedented visibility into the performance of their machine learning models in production. “Monitoring the effectiveness of the production models while enabling collaboration between data science and DevOps teams will make it easier to develop, test, and monitor sophisticated ML models to ensure more relevant, meaningful customer experiences and maximize business impact,” he said.

James Kobielus, analyst

James Kobielus, an analyst and senior research director of data communications and management at data analytics consultancy TDWI, told EnterpriseAI that such AI monitoring services are very useful for enterprises exploring AI.

“What partnerships such as this signal is the rapid convergence of DataOps and MLOps pipelines in the business world,” said Kobielus. “Without tools to continually monitor data and ML model quality issues in a unified fashion, AI applications are at risk of losing their predictive ability in production environments. Lack of strong observability tooling (such as dashboards for visualizing trends, issues, and anomalies in the DataOps/MLOps pipelines) also increases the vulnerability of AI applications to performance issues that might stem from myriad issues in those processes.”

In addition, the ability of enterprises to cleanse bad data or retrain decaying models in real time, with those actions triggered automatically by issues discovered by observability tools, “is a necessary step in the industrialization of a converged DataOps/MLOps pipeline,” said Kobielus. “Observability tools are the key to maintaining a top-performing infrastructure for analytics and other data applications.”
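Neither company publishes the mechanics of that loop in the article, but the underlying pattern is simple to sketch. Below is a minimal, generic illustration (not either vendor’s implementation) in which a two-sample Kolmogorov-Smirnov test flags drift in a model input and triggers a retraining callback:

```python
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

P_VALUE_THRESHOLD = 0.01  # illustrative; in practice tuned per feature


def has_drifted(baseline: np.ndarray, live: np.ndarray) -> bool:
    """Return True when the live feature distribution differs from baseline."""
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < P_VALUE_THRESHOLD


def monitor_and_mitigate(baseline: np.ndarray, live: np.ndarray, retrain) -> bool:
    """Run the drift check and, on drift, fire the mitigation callback."""
    if has_drifted(baseline, live):
        retrain()  # in production this might enqueue a retraining job
        return True
    return False


# Demo: a mean-shifted live sample stands in for decaying production inputs.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)
live = rng.normal(0.5, 1.0, 5_000)
monitor_and_mitigate(baseline, live, lambda: print("retraining triggered"))
```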

In June, explainable AI vendor Fiddler AI secured $32 million in Series B funding for its approach to making AI easier for enterprises to use. The company’s platform uses explainable AI, taking the complicated processes happening behind the scenes and bringing them into a centralized system that monitors, analyzes and explains ML models automatically.
