Thursday, April 18, 2024

Why It’s Time for Manufacturing Engineers to Embrace AI – and Swim Outside Their Lanes  

Everyone knows the total amount of business data has grown exponentially in recent years. But did you know that manufacturing plants and other facilities expect to generate two to six times more data in the next two years? Correspondingly, McKinsey & Company has predicted that the value of the Internet of Things (IoT) market for factories will reach between $1.4 trillion and $3.3 trillion by 2030. That growth is driven by a massive expansion of the IoT data ecosystem that underlies plant automation.

This data is complex, time-series in nature, and largely unstructured. It’s coming from all your control systems – sensors measuring dynamics, flow, sound frequencies, temperature, pressure, oil condition, and vibration – as well as computerized maintenance management systems (CMMSs) and maintenance logs. Video. Audio. Images. Text. All of it pouring in, in real time. All of it needing to be collected, processed, and analyzed at very high speeds.

How do you tame that deluge? No human has the computational power to wrap their mind around such a massive inflow of information, let alone instantaneously. Yet in this era of exponentially growing complexity and data, companies can’t afford not to try. That’s why AI and machine learning (ML) are becoming increasingly vital tools for manufacturers seeking to improve their operations’ efficiency, reduce breakdowns, and forestall catastrophic failures. Staying competitive in manufacturing means not missing the train known as Industry 4.0.

But superfast machines aren’t enough. What’s needed is a holistic picture of operations that enables manufacturers to fix the right problem at the right time, along with a more evolved human/computer interaction. Today, there’s a new kind of reliability engineer – fluent in both mechanical and process data – who can enable it.

Tackling the unknown unknowns with multivariate analysis

Manufacturing processes are complicated. The origins of breakdowns, slowdowns, and deterioration rarely fit into a single information silo. They almost always involve many factors, including “unknown unknowns” – aspects of a machine’s behavior that plant managers would never think to examine under normal conditions.

Often the culprit is variability, which reflects inconsistent manufacturing operations. Variability can undercut reliability, increase costs, degrade quality, and lead to failures. Moving parts and chemical reactions, for instance, shouldn’t vary from day to day in the fabrication of the same product. The challenge is that the incipient causes aren’t always obvious. It would take a sizable team of even the most experienced reliability engineers countless hours to pull, sort through, match, and analyze a cache of data to detect them – likely long after discovery could have prevented a problem.
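
To make the scale problem concrete, here is a minimal sketch – in Python, with purely simulated sensor values – of the kind of control-limit check an automated system can run continuously across thousands of channels, where a human team could cover only a handful:

    import numpy as np

    def out_of_control(readings, window=100, n_sigma=3.0):
        """Flag samples falling outside rolling mean +/- n_sigma * std.

        readings: one sensor channel (e.g. vibration RMS), evenly sampled.
        Returns indices of statistically unusual excursions.
        """
        readings = np.asarray(readings, dtype=float)
        flagged = []
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]        # trailing window of recent behavior
            mu, sigma = baseline.mean(), baseline.std()
            if sigma > 0 and abs(readings[i] - mu) > n_sigma * sigma:
                flagged.append(i)
        return flagged

    # Simulated steady process with one injected upset at sample 700.
    rng = np.random.default_rng(0)
    signal = rng.normal(50.0, 0.5, 1000)
    signal[700] += 5.0
    print(out_of_control(signal))                    # includes 700, the injected upset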

Consider a single compressor blade that gives out. How do you predict or prevent the failure when you can’t see it? You need to get down to the root cause, because degradation doesn’t usually happen in one shot. You may notice that vibration has accentuated a subtle imbalance, causing bearing wear. But what caused (or failed to prevent) the imbalance in the first place?

If you had more information on the blade’s thermodynamic efficiency, for instance, or more ability to analyze your process data, you could drill down deeper, find the source of the original problem, and feed that data into a virtuous cycle of predictive maintenance. That is the promise of multivariate analysis.
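
As a hedged illustration of what multivariate analysis means in practice, the sketch below learns the normal correlation structure across a handful of hypothetical channels and then asks which channel breaks that structure in a suspect snapshot. The channel names, the simulated data, and the choice of PCA are illustrative assumptions, not any specific product’s method:

    import numpy as np
    from sklearn.decomposition import PCA

    channels = ["vibration", "bearing_temp", "discharge_pressure", "efficiency"]

    # Simulate correlated "normal" operation: two hidden drivers move all four channels.
    rng = np.random.default_rng(1)
    drivers = rng.normal(0.0, 1.0, (500, 2))
    mixing = np.array([[1.0, 0.5, 0.0, 0.8],
                       [0.0, 0.7, 1.0, -0.3]])
    normal = drivers @ mixing + rng.normal(0.0, 0.05, (500, 4))

    pca = PCA(n_components=2).fit(normal)            # learn the normal relationships

    def residuals(sample):
        """Per-channel reconstruction error: large values point at the
        variable that violates the learned normal structure."""
        recon = pca.inverse_transform(pca.transform(sample.reshape(1, -1)))[0]
        return dict(zip(channels, np.round(np.abs(sample - recon), 2)))

    # A snapshot that looks ordinary except efficiency has quietly drifted.
    suspect = normal[0].copy()
    suspect[3] += 2.0
    print(residuals(suspect))                        # largest residual: "efficiency"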

AI: part of the puzzle

Artificial intelligence and ML are invaluable tools for taming variability and flagging issues before they become problems. Second-generation AI has been good at pattern recognition – in other words, at recognizing things that have already happened. But that’s only half the battle: you want to detect latent patterns you don’t know about and incorporate insights about what might happen. You want to be able to expect the unexpected and see ahead, rather than looking back to investigate seemingly isolated problems that just somehow occurred.

We’re now entering the era of third-generation AI, which uses unsupervised or semi-supervised ML that can ingest extremely large data sets and help shed light on the “unknown unknowns.” These techniques are enabling manufacturers to move from simple pattern recognition to a world where you can “train on normal” – learn what constitutes normal operating conditions – and then find deviations from normal, whatever they are.
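
A minimal “train on normal” sketch, using an off-the-shelf unsupervised detector (scikit-learn’s IsolationForest) and made-up readings – production systems are far more sophisticated, but the idea is the same:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Train only on data captured while the plant was known to be running normally.
    rng = np.random.default_rng(2)
    normal_ops = rng.normal(loc=[50.0, 1.2], scale=[0.5, 0.05], size=(2000, 2))
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_ops)

    # Score live snapshots: 1 = consistent with learned normal, -1 = deviation.
    live = np.array([[50.1, 1.21],   # looks like the training data
                     [55.0, 0.80]])  # nothing like anything seen before
    print(detector.predict(live))    # expected: [ 1 -1 ]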

AI and ML are essential, but they’re only part of the puzzle. They can’t work to their full potential without people – particularly engineers who know more about production processes than machines ever could. Domain expertise and human intervention are just as critical to the safe and efficient operation of today’s industrial plants as hyper-fast number-crunchers. It’s the combination of people and AI that makes the process work.

The architecture for running at scale is simple: you take in all your data – physical sensors, process data, and enterprise asset management records – and run it through analytics engines, including AI/ML, physics models, and failure modes and effects analysis (FMEA). These feed the dashboards and decision engines, whose outputs are fed back into the algorithms to further improve know-how and reliability. The engines can give an engineer enough time and information to stop a catastrophic failure.
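
In code terms, that loop might look something like the sketch below. The engine logic, thresholds, asset names, and FMEA table are hypothetical placeholders standing in for real trained models and first-principles checks:

    from dataclasses import dataclass

    @dataclass
    class Snapshot:
        asset_id: str
        vibration: float      # from physical sensors
        temperature: float    # from process data

    def ml_engine(s: Snapshot) -> float:
        """Stand-in for a trained anomaly model (0.0 = perfectly normal)."""
        return max(0.0, (s.vibration - 2.0) / 2.0)

    def physics_engine(s: Snapshot) -> float:
        """Stand-in for a first-principles check, e.g. a thermal limit."""
        return 1.0 if s.temperature > 90.0 else 0.0

    # Stand-in FMEA table: known failure paths per asset.
    FMEA = {"pump-7": "bearing wear -> seal failure"}

    def decide(s: Snapshot) -> str:
        """Fuse the engines' outputs into a dashboard/decision message."""
        score = ml_engine(s) + physics_engine(s)
        if score > 0.5:
            mode = FMEA.get(s.asset_id, "unknown failure mode")
            return f"ALERT {s.asset_id}: score={score:.2f}, likely path: {mode}"
        return f"OK {s.asset_id}"

    print(decide(Snapshot("pump-7", vibration=3.1, temperature=95.0)))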

Further, these engines can create a data-driven historical record that identifies past anomalies and reveals hidden patterns that connect the dots, helping to locate root causes and forecast (and therefore prevent) further events down the line. Being able to focus on the conditions of a plant in the present is far more beneficial to maintaining operations than heading blindly into a complicated investigation of past issues. Plants can get hamstrung poring over data from events that already happened – as opposed to acting on the data streaming in now.

Citizen data scientists are the new multidisciplinary heroes of manufacturing

If plants are evolving toward more automation, and AI is evolving toward more prevention, then reliability engineers must evolve as well. Long accustomed to swimming inside their lanes of expertise – vibration, oil analysis, infrared, motor currents, and so on – today’s engineers must embrace the new world of multivariate analysis and become well-versed in data science.

The good news is that an engineer’s mindset is one of continuous learning and adapting. Many have already mastered digital tools such as Excel macros and Python to explore data and its correlations. Mastering AI and machine learning is the natural next step – along with a willingness to let AI and ML do the heavy lifting of analyzing the right data and answering the right questions. We’re seeing this happen not only on the maintenance side but also on the process side of plant operations, as both see the advantages of working with a tool that brings everyone to the table.
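
For an engineer already comfortable with Python, the first step is often as small as a correlation scan across channels pulled from the plant historian. The column names and values below are made up for illustration:

    import pandas as pd

    # Hypothetical hourly export joining condition and process data.
    df = pd.DataFrame({
        "vibration_rms":  [0.8, 0.9, 1.4, 1.6, 2.1],
        "bearing_temp_c": [61, 63, 70, 74, 82],
        "throughput_tph": [98, 97, 95, 93, 90],
    })

    # Strong off-diagonal correlations are leads worth investigating.
    print(df.corr().round(2))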

More than just learning a new skill, today’s citizen data scientists must become truly multidisciplinary – aware of both mechanical data and process data. Manufacturers today typically use only about two percent of the data available to them about their operations. To do their jobs optimally, these professionals must be empowered to work from, and within, a holistic picture of their data – one that covers every aspect of their manufacturing operations, materials, and processes – freeing people from detecting problems so they can spend their time fixing them instead.

It’s time for engineers to swim – and thrive – outside their traditional lanes.

About the Author

Dominic Gallello is CEO of SymphonyAI Industrial. He is a longtime, multi-company CEO and executive in engineering vertical software solutions, including MSC Software, Autodesk, Macromedia and Intergraph Japan. He has led three successful public and private software companies in the past 17 years, resulting in $1.3 billion in exit value, with an average increase in value of more than 300 percent. He was named a SaaS Top 50 CEO in 2018 and brings a track record of company culture awards and more than fifty product awards, including two R&D 100 awards.
