
Machine Learning Pioneer Andrew Ng Shares Advice on ML Deployment, Hiring and Data Integrity 

AWS Machine Learning Summit 2021 – If business IT leaders want to use AI and machine learning to help empower and move their enterprises forward, they could not do much better than getting some direct advice from ML pioneer Andrew Ng.

That is just what happened June 2 (Wednesday) at the virtual, one-day AWS Machine Learning Summit, where Ng participated in a fireside chat focused on the future of ML, while doling out valuable suggestions about finding workers skilled in ML and ensuring that ML projects will be successful.

In a conversation hosted by Swami Sivasubramanian, the vice president of AI and machine learning at AWS, Ng shared tips for bringing ML from proof of concept to production, the importance of quick-win first projects to get momentum going and the wisdom of ensuring that executives charged with mapping and carrying out ML strategies are given adequate education about the technology.

While ML is certainly being used more broadly today by enterprises, it is not yet a mainstream technology with a deep well of needed knowledge for business leaders, said Ng.

“I think we are still on our way,” Ng told Sivasubramanian at the event. “But the adoption of AI and machine learning hasn't stopped yet. And even though AI has created tremendous value and a lot of people are talking about it, I think that the most exciting activities are still yet to come.”

Ng has quite a background in the field. He is the founder of AI education company DeepLearning.AI, the founder and CEO of industrial AI platform company Landing AI and the co-founder and chairman of online learning vendor Coursera. Ng, who is also the managing director of the AI startup incubator AI Fund, formerly worked at Google where he was the founder and lead of the Google Brain deep learning project and formerly was the chief scientist for the AI group at China’s Baidu Inc. He is also an adjunct professor of computer science at Stanford University, where he leads a research group on AI, ML and deep learning.

Here are some of the edited highlights from the fireside chat between Ng and AWS’s Sivasubramanian:

Andrew Ng, left, speaks with Swami Sivasubramanian during the virtual AWS ML Summit.

Swami Sivasubramanian: At AWS, we have more than 100,000 customers already using machine learning. But one of the things we constantly see is CEOs and CTOs asking about how to get started with machine learning. Can you share some advice for CEOs and CTOs and the top leaders in companies on how to be successful with ML?

Andrew Ng: I published online a document called The AI Transformation Playbook, which leaders and executives of organizations going through the digitization and AI transformation journey will find useful. The number one mistake that I see organizations make is taking too long to get started or taking too long to plan it out. I have had CIOs come to me and say things like ‘my data is a mess, and my data silos completely need to be cleaned up.’ It turns out that basically everyone has messy data in their silos.

Now, this is true for pretty much every company – most organizations today have enough data to get started. I find that companies are better off jumping in, getting their hands dirty, delivering a quick win on a smaller project and using the learnings from that to grow to bigger and bigger projects over time. It is important to start with a small pilot project for quick wins. It also starts with building a team, providing broad-based training to the organization, including the executive and execution levels, and formulating a thoughtful strategy.

One of the most interesting things I learned is that many CEOs said they would follow that strategy … and then go to their board to get approval and then execute. But the boards pushed back against the CEOs’ requests. For a lot of companies, until you have done the first few projects, the strategies that companies come up with can be a bit theoretical and academic, as if they had been read in a paper.

Sivasubramanian: What do you think are the key performance indicators to measure the success of an ML project? Many CIOs or teams prematurely give up, or they do not know how to measure whether they are on the right track. Can you share some insights on what you have seen?

Ng: We're in the early phase of the development of AI as an engineering discipline. It took us a long time to figure out how to make software engineering a little bit more predictable, and I think we are at a much earlier point of that journey for machine learning. KPIs and metrics are very project specific.

One thing I often see on many projects is that if you are working on a completely novel application for the first time, it can be difficult to set target metrics for the AI team up front, or to establish a reasonable baseline level of performance for the project the team is working on. I think you just have to build that first prototype system, build it quick and dirty, an initial system just to get a sense of what might be possible.
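
Ng does not walk through any tooling here, but a minimal sketch of the "quick-and-dirty prototype to establish a baseline" idea might look like the following. The use of Python and scikit-learn, the toy dataset, and the specific models are all assumptions chosen for illustration, not anything Ng prescribes.

```python
# Illustrative sketch: establish a baseline before committing to target metrics.
# A toy dataset stands in for real project data.
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Trivial baseline: always predict the majority class.
majority = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Quick-and-dirty first model: an off-the-shelf classifier with default settings.
prototype = LogisticRegression(max_iter=5000).fit(X_train, y_train)

print("majority-class baseline:", accuracy_score(y_test, majority.predict(X_test)))
print("prototype model:        ", accuracy_score(y_test, prototype.predict(X_test)))
# The gap between these two numbers gives a first, realistic sense of what
# target metrics are reasonable for the project.
```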

Sivasubramanian: What are some of the major challenges that are stopping ML from becoming more mainstream?

Ng: A lot of AI projects have a proof-of-concept to production gap problem. With a proof of concept, a lab demo, the data scientist or engineer gets something to work on a laptop and then says they achieved good accuracy on a test set. But it turns out that when you reach that milestone of the lab demo, there is still a lot of work needed to take it to production environments. There is all the software that needs to be written to handle the data and make the system maintainable. We now see a path to solving these problems, but we have to get through it.

Sivasubramanian: How would you advise companies to get access to the right amount of ML training data? Data is the fuel for machine learning at the end of the day, and this is one area where we see many companies having a lot of difficulty getting started.

Ng: My typical advice is to just jump in and start doing something with a small data set. You can often go back and collect more data. I find that for many practical applications, rather than a model-centric approach where you hold the data fixed and try to improve the code, it is more useful to hold the code fixed and work on iteratively improving the data. This is a nascent part of MLOps that I do not think anyone really has great tools for yet. I think this whole field of data-centric AI development still needs to be fleshed out, so I am excited about that as a major direction in AI.
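
Ng does not describe specific code in the chat, but the "hold the code fixed, iterate on the data" idea can be sketched roughly as below. Python with scikit-learn is an assumption, and the label-error check is a simplified stand-in for real data-quality tooling rather than Ng's or Landing AI's actual method.

```python
# Rough sketch of one data-centric iteration: the model code stays fixed,
# and each pass tries to improve the data, here by flagging likely label
# errors for human review. Everything below is illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict


def flag_suspect_labels(X, y, threshold=0.9):
    """Flag rows whose given label disagrees with a confident
    cross-validated prediction from the (fixed) model.
    Assumes y holds integer class labels 0..k-1."""
    model = LogisticRegression(max_iter=5000)
    proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")
    predicted = proba.argmax(axis=1)
    confidence = proba.max(axis=1)
    return np.where((predicted != y) & (confidence >= threshold))[0]


# One iteration of the loop:
#   suspects = flag_suspect_labels(X, y)
#   ...human review corrects or drops the flagged labels...
#   retrain the same, unchanged model on the cleaned data and re-measure.
```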

Sivasubramanian: That makes sense. Our own experience with machine learning, and we see this with many customers, is that many scientists and engineers spend more time on data processing, validation and preparation than on tuning the algorithms.

Ng: We even say it like a joke, that 80 percent of data science is data preparation. I think preparing the data is a core part of our job, and accelerating the efficiency with which you can do that will be critical to helping us build and deploy machine learning systems.

Sivasubramanian: One thing you mentioned was MLOps competency. Can you share for a broader audience why machine learning operations, or MLOps, is so fundamental to this journey?

Ng: There's a lot of excitement about the ability to train an AI model in the lab and then publish papers and generate good results. But when people look at the lifecycle of a machine learning project, a lot more is needed than training the model. There is scoping of the project, deciding what to do and what not to do. There is collecting the data … and ensuring high-quality data, and much more. Then you come back to the AI model, push it into production, and then audit for any performance or fairness issues.

I think that workflow involves a lot of work, and the tools to manage that process are still at a relatively nascent stage and are being developed. But if we can build those tools, then we can empower a lot more people to build, deploy, maintain and effectively use machine learning systems.

DevOps is an important discipline for building scalable software systems. In trying to contribute to the emerging discipline of MLOps, one very interesting thing stands out: for AI systems, it is not just code. For code, we have the DevOps discipline. But AI is code plus data, and the interesting thing about data is that MLOps has to manage a consistently high-quality flow of data through the project, which makes the work more iterative. Code and data need to work together.
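
As a very rough illustration of the "code plus data" point, and not anything AWS- or Landing AI-specific, an MLOps pipeline step might gate training on basic data-quality checks and record which data version produced which model. The column names, thresholds and helper functions below are hypothetical.

```python
# Illustrative sketch only: a training step that treats data as a first-class,
# versioned input alongside code, with a simple quality gate before training.
import hashlib
import json

import pandas as pd


def data_fingerprint(df: pd.DataFrame) -> str:
    """Hash the dataset so the exact data version can be recorded with the model."""
    row_hashes = pd.util.hash_pandas_object(df, index=True).values
    return hashlib.sha256(row_hashes.tobytes()).hexdigest()


def quality_gate(df: pd.DataFrame, required_columns, max_null_fraction=0.01):
    """Fail fast if the incoming data does not meet basic quality expectations."""
    missing = [c for c in required_columns if c not in df.columns]
    if missing:
        raise ValueError(f"missing columns: {missing}")
    null_fraction = df[required_columns].isnull().mean().max()
    if null_fraction > max_null_fraction:
        raise ValueError(f"too many nulls: {null_fraction:.2%}")


def train_step(df: pd.DataFrame, required_columns, code_version: str) -> str:
    quality_gate(df, required_columns)
    # ...model training would happen here (omitted)...
    # Record enough metadata to reproduce or audit this run later.
    return json.dumps({"code_version": code_version,
                       "data_version": data_fingerprint(df)})
```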

Sivasubramanian: What about skills training to be ready for ML going mainstream? What things do you recommend that a hiring manager or CIO be aware of to build a high-quality data science team?

Ng: Many companies hire, or try to hire, great data scientists. But the world today does not have enough ML engineers and data scientists. So I see more and more companies getting good results by taking their domain experts, who already know their business well, and giving them a little bit of knowledge and training in how to use data science and machine learning tools.

To find interesting use cases in machine learning and data science, you need that interdisciplinary knowledge of the technology as well as the business application, and you can build this into a new team that works well. These people can be a source of great ideas.

Sivasubramanian: You are among the most prominent leaders in education for ML, and you also co-founded Coursera. How can we make the education for ML easier and enable companies to train their talent? At Amazon six years ago, we started our own machine learning university, where we said that we are going to introduce the basics of ML from ML 101 to Natural Language Processing to computer vision and everything else. I do not think every company needs to start its own ML university because there is so much information available online, but what would you recommend on the ML training front?

Ng: With the rise of online digital content, it is easier than ever for a CEO or chief learning officer or head of engineering to find content to level up the workforce. I am excited about the work that AWS and DeepLearning.AI have been doing as well to create content on data science.

It is also important to get executive-level people some nontechnical AI knowledge, because that makes projects more successful. It makes it easier for the executives to collaborate with the technical teams responsible for these projects. It is easier than ever before for the executive leadership team to find the right content so they can very efficiently skill up a workforce.

Sivasubramanian: One final question. I always wonder: if I were an engineer graduating from college today, what advice would you give me?

Ng: I find that the most skilled individuals in AI are T-shaped, with a broad base of technical knowledge together with real depth in some areas. Coursework tends to be a very efficient way for individuals to gain that broad base of technical knowledge. And then beyond a certain point, to gain that deeper knowledge, you have to jump in and do project work. We all want to build a project that will help millions or tens of millions of people and create massive economic value while doing good for lots of people. And hopefully, many people watching this will get there, or maybe they are there already.

And community is important too. I think we are all shaped by the people around us, so find like-minded people and share knowledge with each other.

Sivasubramanian: That is great advice. I think we are in the early days of the machine learning revolution. It reminds me so much of the early days of cloud computing, when I joined Amazon 15 years ago. It still feels like there is so much invention to be done, especially for pioneers like you and your companies.

Ng: Thank you. But please let me tell one story before we go. Before the COVID-19 pandemic, I attended an event in California called Maker Faire, and I saw a display by a student who had flown from India to the United States to show his robot. He had built a robot that uses a camera to take pictures of plants and machine learning to see if the plants are diseased.

I looked at this project and I thought, wow, if a Stanford PhD student had done the same project five or six years before, it would have been a perfectly fine project. So, I asked the student how old he was. ‘I am 12 years old,’ he said. So we now live in a world where the ability to do things that were cutting edge not too long ago is available to a 12-year-old with access to computers, which I know not everyone has. But I feel like, with the availability of online courses, I hope we can build a future where there are tons more people building all kinds of systems in every sort of imaginable industry, and together they will create a lot of value for everyone.

AIwire