Degradation State Assessment Modeling Using Causality Discovery

Bibliographic Details
Published in: 2022 Prognostics and Health Management Conference (PHM-2022 London), pp. 545-548
Main Authors: Feng, Chen; Liu, Xiaochen; Bi, Shulei; Kang, Jian
Format: Conference Proceeding
Language: English
Published: IEEE, 01.05.2022

Summary: To solve the problem of equipment degradation state assessment, one idea is to use a data-driven method to build an equipment health-state model and evaluate degradation from the model residuals. However, most current data-driven models capture the correlation between condition monitoring variables and equipment state rather than the causal relationship, so the rationality of the model construction lacks explanation. Therefore, a causality discovery algorithm was introduced in this work to find variables that are causally related to the degradation state, build a state model from them, and improve the interpretability of the model. In this paper, the COmbined Diesel eLectric And Gas (CODLAG) propulsion system degradation dataset was used for the experiments. The Fast Causal Inference (FCI) algorithm was used to discover the causal relationships among the variables, as shown in the causal graph. Based on the causal graph, four groups of variables were selected to train Long Short-Term Memory (LSTM) neural networks as models to assess the degradation state. The experimental results showed that the variables with strong causal relationships to the equipment state were sufficient for training the model, and that the trained LSTM neural network performed well for degradation state assessment. More importantly, the model trained in this way had better interpretability.
ISSN: 2166-5656
DOI: 10.1109/PHM2022-London52454.2022.00102
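The summary describes a two-stage pipeline: causal discovery over the monitoring variables with the FCI algorithm, followed by training an LSTM on the causally related variables to assess the degradation state. The sketch below illustrates that pipeline in outline only and is not the authors' implementation; the paper does not specify its tooling. It assumes the causal-learn library's fci routine and PyTorch for the LSTM, and the arrays X and y plus the column indices causal_idx are placeholders standing in for the CODLAG monitoring data and a variable group read off the paper's causal graph.

# Illustrative sketch only, not the authors' code. X, y, and causal_idx are
# placeholders for the CODLAG monitoring variables, the degradation indicator,
# and the variables selected from the FCI causal graph.
import numpy as np
import torch
import torch.nn as nn
from causallearn.search.ConstraintBased.FCI import fci

# Step 1: causal discovery with FCI (causal-learn implementation).
X = np.random.rand(500, 16)          # placeholder condition-monitoring data
y = np.random.rand(500, 1)           # placeholder degradation state
data = np.hstack([X, y])             # last column = degradation state
pag, edges = fci(data, alpha=0.05)   # returns a partial ancestral graph and its edges

# Inspect `edges` for variables connected to the degradation column and keep
# only those columns; the indices below are placeholders for one such group.
causal_idx = [0, 3, 7, 11]

# Step 2: LSTM regressor trained on the causally selected variables.
class DegradationLSTM(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):              # seq: (batch, time, features)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1, :])  # predict state from the last time step

def make_windows(x, y, win=20):
    # Slice the series into fixed-length windows for sequence training.
    xs = np.stack([x[i:i + win] for i in range(len(x) - win)])
    ys = y[win:]
    return (torch.tensor(xs, dtype=torch.float32),
            torch.tensor(ys, dtype=torch.float32))

seq_x, seq_y = make_windows(X[:, causal_idx], y)
model = DegradationLSTM(n_features=len(causal_idx))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):                  # short demonstration training loop
    opt.zero_grad()
    loss = loss_fn(model(seq_x), seq_y)
    loss.backward()
    opt.step()

Comparing the paper's four variable groups would, under these assumptions, amount to rerunning the training loop with different causal_idx sets and comparing the assessment residuals on held-out data.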