Explainable AI in Deep Reinforcement Learning Models for Power System Emergency Control


Bibliographic Details
Published in: IEEE Transactions on Computational Social Systems, Vol. 9, No. 2, pp. 419-427
Main Authors: Zhang, Ke; Zhang, Jun; Xu, Pei-Dong; Gao, Tianlu; Gao, David Wenzhong
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2022

Summary: Artificial intelligence (AI) technology has become an important trend to support the analysis and control of complex and time-varying power systems. Although deep reinforcement learning (DRL) has been utilized in the power system field, most of these DRL models are regarded as black boxes, which are difficult to explain and cannot be used when human operators need to participate. Using explainable AI (XAI) technology to explain why power system models make certain decisions is as important as the accuracy of the decisions themselves, because it ensures trust and transparency in the model's decision-making process. This article discusses the interpretability issue of DRL models in power system emergency control. The proposed interpretable method is a backpropagation deep explainer based on Shapley additive explanations (SHAPs), named the Deep-SHAP method. Deep-SHAP is adopted to provide a reasonable interpretable model for a DRL-based emergency control application. For the DRL model, the importance of each input feature is quantified to obtain its contribution to the model's output. Further, feature classification of the inputs and probabilistic analysis of the outputs in the XAI model are added to the interpretability results for better clarity.
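
The summary describes Deep-SHAP as a backpropagation deep explainer built on SHAP that attributes a DRL model's output to its input features. A minimal sketch of that general idea follows, using the open-source shap library's DeepExplainer on a toy PyTorch Q-network; the network shape, feature names, and random data are illustrative assumptions, not the paper's actual model or grid measurements.

import numpy as np
import torch
import torch.nn as nn
import shap

# Toy stand-in for a trained DQN-style Q-network that maps a power-system
# observation (e.g., bus voltages, line loadings) to Q-values per control
# action. The paper's real model and features are not given in this record.
n_features, n_actions = 8, 4
q_net = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)

# Background observations (random here; in practice, samples from the
# agent's replay buffer) serve as DeepExplainer's reference inputs.
background = torch.randn(100, n_features)
explainer = shap.DeepExplainer(q_net, background)

# Attribute the Q-values of a batch of states to the input features.
states = torch.randn(5, n_features)
sv = explainer.shap_values(states)
# Older shap releases return a list (one array per action); newer ones
# return a single (samples, features, actions) array. Normalize both.
sv = np.stack(sv, axis=-1) if isinstance(sv, list) else np.asarray(sv)

# Per-feature contribution to action 0's Q-value for state 0; a large
# |value| flags the measurements that drove the control decision.
feature_names = [f"feature_{i}" for i in range(n_features)]  # hypothetical names
for name, c in zip(feature_names, sv[0, :, 0]):
    print(f"{name}: {float(c):+.4f}")

DeepExplainer approximates Shapley values by propagating contribution signals backward through the network layer by layer (in the style of DeepLIFT), which keeps attribution tractable for deep models where exact Shapley computation would be infeasible.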
ISSN: 2329-924X, 2373-7476
DOI: 10.1109/TCSS.2021.3096824