D2iQ Releases Latest Version of Kaptain AI/ML Platform
This new version includes some significant firsts:
- Kaptain AI/ML 2.1 is the first cloud-native platform to enable Nvidia GPU Cloud (NGC) catalog containers to be launched directly from Kubeflow, empowering developers with pretrained, best-in-class GPU-optimized models for greater accuracy in production.
- Kaptain AI/ML 2.1 features the first-ever seamless integration of Kubeflow and MLflow, giving users metadata tracking and visualization that enables improved model performance and the tracking of experiments directly from their notebooks. The integration means data scientists no longer have to choose between Kubeflow and MLflow.
- Kaptain AI/ML 2.1 integrates seamlessly with DKP 2.3, the industry's leading cloud-native platform. This integration enables enterprises to standardize their infrastructure, running ML pipelines and other workloads on a single enterprise-ready platform.
- Kaptain AI/ML is the only Kubeflow platform to eliminate all critical Common Vulnerabilities and Exposures in all components, highlighting security as a priority. In addition, Kaptain AI/ML 2.1 includes stronger identity provider integration. When combined with the military-grade security features in DKP, Kaptain AI/ML 2.1 provides an exceptionally secure AI/ML pipeline.
The new release also enables users to run Kaptain AI/ML workloads on Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) and Microsoft Azure Kubernetes Service (AKS), extending deployment options while further simplifying artificial intelligence (AI) and machine learning (ML) operations. Kaptain 2.1 continues strong support for air-gapped environments.
Additional key features of Kaptain AI/ML 2.1 include a simplified user interface and streamlined management tasks. These customer-led enhancements accelerate time-to-value and operational success for AI/ML workloads.
"While more organizations are adopting Kubernetes to scale workloads in production environments, the growing complexities and lack of technical skills are holding back the full potential of AI/ML deployments," said Deepak Goel, Chief Technology Officer at D2iQ.
"DKP 2.3 and Kaptain AI/ML 2.1 enable data scientists to harness the scalability and flexibility of Kubernetes without having to struggle with its technological challenges," Goel explained, adding that, "The new updates continue our commitment to simplifying and expanding AI/ML infrastructure and are the next step toward making workloads easier to manage across the Kubernetes distributions that are the foundation of future innovation."
Reduced Complexity Yields Higher AI Success Rates
Kaptain AI/ML is an enterprise-ready distribution of open-source Kubeflow that enables organizations to develop, deploy, and run AI/ML workloads in production at scale in a consistent and repeatable manner without sacrificing security or compliance requirements. By simplifying AI/ML operations, Kaptain AI/ML frees data scientists to focus on business objectives rather than configuring complex underlying Kubernetes infrastructures.
Consistency and Governance Bring Improved Operations and Security
Many AI and ML initiatives began as "skunk works" projects, with data scientists needing to buy, build, and provision their own clusters for running their pipelines. By running Kaptain AI/ML on DKP clusters, enterprises are able to leverage economies of scale and take advantage of the security and consistency inherent in the DKP platform. Increasingly, enterprises are consolidating their Kubernetes efforts to ensure they are consistent and secure no matter where they are running. DKP provides that consistency, and running Kaptain AI/ML 2.1 on DKP 2.3 extends that security and consistency to AI/ML workloads.
Overall, the new Kaptain AI/ML 2.1 capabilities provide more flexibility, choice, and increased productivity in Kubernetes environments. Kaptain AI/ML 2.1 is now generally available. For more information, see www.D2iQ.com.
D2iQ is the leading provider of enterprise-grade cloud platforms that enable organizations to embrace open-source and cloud-native innovation while delivering smarter Day 2 operations. With unmatched experience driving some of the world's largest cloud deployments, D2iQ empowers organizations to better navigate and accelerate cloud-native journeys with enterprise-grade technologies, training, professional services, and support. Whether you are deploying your first Kubernetes workload, optimizing your business analytics with Apache Spark or Jupyter, or looking to educate your developers on the benefits of cloud native, D2iQ has the expertise, services, and technology to enable you to succeed. D2iQ is headquartered in San Francisco with additional offices in London and Hamburg.