
MIT-IBM Watson AI Lab Tackles Power Grid Failures with AI 

Next time your power stays on during a severe weather event, you may have a machine learning model to thank.

Researchers at the MIT-IBM Watson AI Lab are using artificial intelligence to detect and help prevent power grid failures. Jie Chen, manager of the MIT-IBM Watson AI Lab, and his colleagues have developed a machine learning model that analyzes data collected from hundreds of thousands of sensors located across the U.S. power grid.

The sensors, components of what is known as synchrophasor technology, compile vast amounts of real-time data related to electric current and voltage in order to monitor the health of the grid and locate anomalies that could cause outages.

Synchrophasor analysis requires intensive computational resources due to the size and real-time nature of the data streams the sensors produce. This makes it difficult to quickly sift through the data for anomaly detection, or the “task of identifying unusual samples that significantly deviate from the majority of the data instances,” as defined in the researchers’ paper.

The ML model can be trained without annotated data on power grid anomalies, which is advantageous because much of the data collected by the sensors is unstructured and carries no labels marking which samples are anomalous.

“In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” said Chen in the MIT News article.
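To make that contrast concrete, the sketch below shows the kind of hand-written, rule-based check Chen describes: a fixed voltage-surge threshold that an operator has to choose and maintain by hand. The 5 percent threshold, function name, and readings are illustrative assumptions, not taken from the researchers' work.

```python
# A minimal sketch of a hand-written, rule-based alert of the kind Chen
# describes. The 5% surge threshold and the reading values are hypothetical.

def voltage_surge_alert(previous_voltage: float, current_voltage: float,
                        surge_threshold: float = 0.05) -> bool:
    """Return True if the voltage jumped by more than the threshold fraction."""
    if previous_voltage == 0:
        return False  # avoid dividing by zero on a dead reading
    relative_change = (current_voltage - previous_voltage) / previous_voltage
    return relative_change > surge_threshold

# Example: a line reading that jumps from 120 kV to 128 kV trips the alert.
print(voltage_surge_alert(120_000.0, 128_000.0))  # True
```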

To develop this ML model, the researchers first defined an anomaly as a low-probability event and treated the power grid dataset as samples from a probability distribution whose density could be estimated. Readings that fall in low-density regions of that distribution, in other words low-probability events, are flagged as anomalies.
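The basic recipe can be sketched in a few lines: fit a density estimator to normal sensor readings, then flag new samples whose estimated density falls below a chosen quantile. In the sketch below, a scikit-learn kernel density estimator stands in for the researchers' model, and the synthetic voltage data and 1 percent threshold are illustrative assumptions.

```python
# A minimal sketch of density-based anomaly detection. A kernel density
# estimator stands in for the researchers' normalizing-flow model; the
# synthetic voltage readings and the 1% threshold are illustrative.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
normal_readings = rng.normal(loc=230.0, scale=2.0, size=(5000, 1))  # "healthy" voltages

density_model = KernelDensity(bandwidth=0.5).fit(normal_readings)

# Threshold: the log-density below which only 1% of the training data falls.
train_log_density = density_model.score_samples(normal_readings)
threshold = np.quantile(train_log_density, 0.01)

new_readings = np.array([[229.7], [231.2], [250.0]])  # the last one is a surge
is_anomaly = density_model.score_samples(new_readings) < threshold
print(is_anomaly)  # [False False  True]
```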

[Figure: An example of a simple Bayesian network.]

Estimating a probability distribution is tricky with such complex data, so the researchers used a deep-learning model called a normalizing flow to assess the probability density. The normalizing flow model is scaled using a Bayesian network, a graph that can learn how the sensors are structured and how they interact. This graph structure allows the model to recognize patterns in the data and therefore detect anomalies more accurately.
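The core idea of a normalizing flow is to start from a simple base distribution and push it through an invertible transform, so the density of any observation can be computed exactly with the change-of-variables formula. The toy PyTorch snippet below illustrates that mechanism with a single affine transform; the real model is far richer and is conditioned on the learned sensor graph, and the voltage-like numbers are made up.

```python
# A toy illustration of the normalizing-flow idea, not the researchers' model:
# a simple base density pushed through one invertible (affine) transform.
import torch
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import AffineTransform

base = Normal(loc=0.0, scale=1.0)  # simple base distribution p(z)
flow = TransformedDistribution(base, [AffineTransform(loc=230.0, scale=2.0)])

# log p(x) = log p(z) + log|det dz/dx|, handled internally by log_prob().
readings = torch.tensor([229.5, 231.0, 250.0])
print(flow.log_prob(readings))  # the 250.0 reading gets a much lower log-density
```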

According to MIT News, the “Bayesian network factorizes, or breaks down, the joint probability of the multiple time series data into less complex, conditional probabilities that are much easier to parameterize, learn, and evaluate.” Because this factorization simplifies the probabilities involved, the ML model can learn the graph structure on its own.
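A tiny worked example helps show what that factorization buys. In the hypothetical three-sensor graph below, where sensor A feeds B and C, the joint log-probability is simply the sum of one conditional term per node given its parents; the Gaussian conditionals and the graph itself are made up for illustration and are not the learned structure from the paper.

```python
# A hedged sketch of Bayesian-network factorization: the joint probability of
# all sensors is the product of per-node conditionals given their parents.
# The three-sensor DAG and Gaussian conditionals are purely illustrative.
import numpy as np

parents = {"A": [], "B": ["A"], "C": ["A"]}  # hypothetical DAG: A -> B, A -> C

def log_conditional(value, parent_values):
    """Toy conditional density: Gaussian (std 2) centered on the parents' mean."""
    mean = np.mean(parent_values) if parent_values else 230.0
    return -0.5 * ((value - mean) / 2.0) ** 2 - np.log(2.0 * np.sqrt(2 * np.pi))

def joint_log_prob(sample):
    # log p(A, B, C) = log p(A) + log p(B | A) + log p(C | A)
    return sum(
        log_conditional(sample[node], [sample[p] for p in parents[node]])
        for node in parents
    )

print(joint_log_prob({"A": 230.0, "B": 231.0, "C": 229.0}))
```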

The researchers are interested in further studies on how these models can be scaled to larger and larger graphs and applied to tasks beyond anomaly detection. Because of its adaptable methodology, this technology could also be applied to other areas that involve complex data collection and analysis, such as monitoring and analyzing traffic patterns.

“Once the model is deployed, it would continue to learn from a steady stream of new sensor data, adapting to possible drift of the data distribution and maintaining accuracy over time,” said Chen in the MIT News article.
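One way such continual adaptation could look in practice is sketched below: keep a sliding window of recent readings and periodically refit the density model and its alert threshold so the detector tracks gradual drift. The window size, refit interval, and kernel-density stand-in are all illustrative assumptions, not the researchers' deployment recipe.

```python
# A hedged sketch of adapting an anomaly detector to a data stream by
# periodically refitting on a sliding window. All sizes and the kernel
# density stand-in are illustrative, not the researchers' deployment setup.
from collections import deque
import numpy as np
from sklearn.neighbors import KernelDensity

WINDOW_SIZE = 10_000   # keep only the most recent readings
REFIT_EVERY = 1_000    # refit after this many new samples

def process_stream(stream):
    window = deque(maxlen=WINDOW_SIZE)
    model, threshold, seen = None, None, 0
    for reading in stream:
        # Score against the current model, if one has been fit yet.
        if model is not None and model.score_samples([[reading]])[0] < threshold:
            print(f"anomaly detected: {reading}")
        window.append(reading)
        seen += 1
        if seen % REFIT_EVERY == 0:  # periodic refit tracks distribution drift
            data = np.array(window).reshape(-1, 1)
            model = KernelDensity(bandwidth=0.5).fit(data)
            threshold = np.quantile(model.score_samples(data), 0.01)
```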

To learn more about this research, read the original MIT News article.
