8 Questions to Ask Before Embarking on AI
AI and ML are starting to transform business. Gartner estimates that AI technology will generate $3.9 trillion in business value by 2022. Soon, companies that do not invest in AI risk falling behind competitively. That said, organizations should not adopt AI just to stay on trend. It’s crucial to build an AI business case, assess the company’s AI readiness and structure the program the right way.
To achieve AI ambitions quickly, reliably and cost-effectively, here are eight questions every organization should ask.
- Do we have realistic expectations? AI and ML can deliver cost savings and revenue growth, but the ROI is not always immediate. AI can be resource-demanding, particularly in terms of compute and storage. While hardware is becoming more powerful and less expensive, if you have large amounts of data and anticipate large sets of deep learning experiments, you should expect relatively high upfront costs for the AI implementation. Starting with a proof of concept might be the best way to embark on AI since it will help you learn about the landscape -- such as development velocity, risk areas, opportunity areas, data pipeline issues, and applicable ML models. This can lead to realistic expectations and accelerated time to value through improved focus.
- What problem are we trying to solve? Ensure that you have a well-defined strategy that specifies why you are doing the project and for whom. In our experience, it is common to solve a few problems in parallel. The problems can be categorized as commercial, tactical and strategic. The commercial category is the business driver for tackling AI. This area should be the primary motivation for leveraging AI and ensures the program is on solid footing for continued investment. Tactical is concerned with blocking and tackling issues related to project success, such as where to start, what skills are needed and what infrastructure is required. Strategic is concerned with structuring the AI program in a way that leads to a virtuous cycle of successful AI model development and integration into existing systems.
- Do we have the right data? Data from sources such as sales and marketing, HR and security will be the basis for training models -- they’ll learn to recognize patterns in the data and identify relationships between those patterns and the outcomes you’re looking for. Thus, it’s important to understand the requirements ML algorithms will place on your data -- completeness and quality in particular. These requirements will determine whether additional data pipeline work is required before model training can start.
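A completeness check of the kind described above can be sketched in a few lines. This is an illustrative example only; the field names (`region`, `revenue`) and records are hypothetical.

```python
# Minimal sketch: measure per-field completeness of tabular records
# before model training. Field names and data are hypothetical.

def completeness_report(records, required_fields):
    """Return the fraction of records with a non-empty value per field."""
    report = {}
    for field in required_fields:
        filled = sum(1 for r in records if r.get(field) not in (None, ""))
        report[field] = filled / len(records)
    return report

sales = [
    {"region": "EMEA", "revenue": 120_000},
    {"region": "APAC", "revenue": None},    # incomplete record
    {"region": "", "revenue": 95_000},      # incomplete record
    {"region": "AMER", "revenue": 110_000},
]

print(completeness_report(sales, ["region", "revenue"]))
# {'region': 0.75, 'revenue': 0.75}
```

A report like this makes the pipeline question concrete: fields well below an agreed completeness threshold need remediation work before training begins.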
- Do we have in-house AI expertise? AI programs rely on specialized skill sets to create the technology that will drive automation, experimentation and business value. Data engineers are needed to ensure the quality and suitability of data and integrate it with AI infrastructure. Data scientists and ML engineers analyze the data, look for patterns, and develop the AI models, algorithms and neural networks that will consume the data and extract actionable insights. DevOps engineers deploy and manage AI solutions in production. It’s crucial to identify gaps in your team’s capabilities and ensure they are addressed before beginning your AI journey.
- Do we have the hardware infrastructure? Capacity planning is an important element of AI initiatives, from workstations to compute clusters. ML and DL will place your infrastructure under additional strain. Careful analysis should be done to ensure throughput expectations are met. It’s vital to find out how much data needs to be processed, what the data access demands are (IOPS, network bandwidth), how many models need to be trained, how complex the models are (number of layers, computational cost) and how you’ll employ continuous learning strategies.
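The capacity analysis above can start as back-of-envelope arithmetic. The sketch below is illustrative only; every figure (step times, GPU count, dataset size) is an assumption, not a benchmark.

```python
# Illustrative capacity-planning arithmetic; all numbers are assumptions.

def training_hours(num_models, steps_per_model, sec_per_step, num_gpus):
    """Wall-clock hours to train a queue of models on a shared GPU pool,
    assuming perfect parallelism across GPUs."""
    total_sec = num_models * steps_per_model * sec_per_step
    return total_sec / num_gpus / 3600

def dataset_bandwidth_gb_s(dataset_gb, epoch_minutes):
    """Sustained read bandwidth (GB/s) needed to stream a dataset
    once per training epoch."""
    return dataset_gb / (epoch_minutes * 60)

# Hypothetical scenario: 20 experiments of 100k steps at 0.5 s/step
# on 8 GPUs; a 2 TB dataset read once per 30-minute epoch.
print(round(training_hours(20, 100_000, 0.5, 8), 1))   # 34.7 hours
print(round(dataset_bandwidth_gb_s(2000, 30), 2))      # 1.11 GB/s
```

Even rough numbers like these surface whether the bottleneck will be compute, storage throughput or queue time, which shapes the hardware purchase or cloud plan.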
- Do we have a modern software infrastructure that leverages available hardware? Whether your hardware is on-prem or in the cloud, ML requires software that makes effective use of it. Additionally, you’ll want a portable solution with the flexibility to train models on-prem or in the cloud, enabling hybrid and multi-cloud infrastructure strategies. This is why Kubernetes – an open-source container-orchestration system that automates deployment, scaling, and management of containerized applications – has proven to be the right infrastructure automation solution for AI. Kubernetes can efficiently use your physical resources, sharing them among users, and works equally well across multiple types of infrastructure, whether private or public, bare-metal, VMware, OpenStack, AWS, GCP, Azure, etc.
- Do we have a modern software stack that leverages the latest ML innovations? The AI world moves fast. It’s essential for organizations to: take advantage of the latest software and achieve high levels of productivity; ensure that new versions of software platforms can be rolled out safely, without impacting production DevOps pipelines; and leverage multiple versions of components in your technology stack. A few logistics problems also arise at scale, such as how to fully leverage your hardware, achieving close to 100 percent utilization when necessary.
- Are we aware of common pitfalls? AI has many pitfalls, including poor data quality and the difficulty of identifying, monitoring and explaining bias in trained models. Knowing about these pitfalls helps you avoid them. They should be on your project execution checklist and used as part of your project governance.
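One widely used bias check that can sit on such a checklist is the disparate-impact ratio: the rate of favorable model outcomes for one group divided by the rate for another, with ratios below roughly 0.8 commonly flagged for review. The sketch below is illustrative, and the outcome data is made up.

```python
# Illustrative bias check: disparate-impact ratio between two groups.
# A ratio of 1.0 means parity; values below ~0.8 are a common red flag.
# All data below is hypothetical.

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of favorable-outcome rates, group A relative to group B."""
    return favorable_rate(group_a) / favorable_rate(group_b)

# 1 = model predicted a favorable outcome, 0 = unfavorable
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # 30% favorable
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% favorable

print(round(disparate_impact(group_a, group_b), 2))
# 0.43 -- well below 0.8, so this model warrants review
```

Metrics like this don't explain *why* a model is biased, but tracking them per release makes bias a monitored, governable quantity rather than a surprise.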
These eight questions can form the fundamentals of a successful AI program. All that remains is to take the plunge and start your own AI journey. The sooner you begin, the sooner your business can begin reaping the benefits.
Carmine Rimi is a product manager at Canonical, where he is responsible for the global AI/ML strategy, including Kubernetes.