Relation between prognostics predictor evaluation metrics and local interpretability SHAP values
Published in: Artificial Intelligence, Vol. 306, p. 103667
Main Authors:
Format: Journal Article
Language: English
Published: Amsterdam: Elsevier B.V., 01.05.2022
Summary:
• It is important to study the interpretability of prognostics models.
• We analyze the interpretability of Linear Regression, Multi-Layer Perceptron, and Echo State Network.
• SHAP values are monotonic, trendable, and prognosable.

Maintenance decisions in domains such as aeronautics depend increasingly on the ability to predict the failure of components and systems. When data-driven techniques are used for this prognostic task, they often face headwinds due to their perceived lack of interpretability. To address this issue, this paper examines how features used in a data-driven prognostic approach correlate with established metrics of monotonicity, trendability, and prognosability. In particular, we use the SHAP model (SHapley Additive exPlanations) from the field of eXplainable Artificial Intelligence (XAI) to analyze the outcome of three increasingly complex algorithms: Linear Regression, Multi-Layer Perceptron, and Echo State Network. Our goal is to test the hypothesis that the prognostics metrics correlate with the SHAP model's explanations, i.e., the SHAP values. We use baseline data from a standard data set that contains several hundred run-to-failure trajectories for jet engines. The results indicate that SHAP values track these metrics closely; the differences observed between the models support the assertion that model complexity is a significant factor to consider when explainability matters in prognostics.
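The record does not include the authors' code, but the workflow the abstract describes can be illustrated with a minimal sketch: fit one of the three models (here, Linear Regression) to run-to-failure data, compute SHAP values with the shap library, and score each feature's per-unit SHAP trajectories with common formulations of monotonicity, trendability, and prognosability (Coble-and-Hines-style definitions; the paper may use different variants). The file name cmapss_fd001.csv, the column names unit and RUL, the s_ sensor-column prefix, and the exact metric formulas are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch (not the authors' exact pipeline): explain a prognostics
# model with SHAP, then evaluate the SHAP trajectories with prognostics
# metrics. Column names and metric formulas are illustrative assumptions.
import numpy as np
import pandas as pd
import shap
from sklearn.linear_model import LinearRegression

def monotonicity(traj):
    """|fraction of rises - fraction of falls| over one unit's trajectory."""
    d = np.diff(traj)
    return abs(int((d > 0).sum()) - int((d < 0).sum())) / max(len(d), 1)

def prognosability(trajs):
    """exp(-std of end-of-life values / mean |end - start| span) across units."""
    ends = np.array([t[-1] for t in trajs])
    spans = np.array([abs(t[-1] - t[0]) for t in trajs])
    return float(np.exp(-ends.std() / max(spans.mean(), 1e-12)))

def trendability(trajs):
    """Smallest pairwise |correlation| between unit trajectories,
    resampled to a common length so units of different duration compare."""
    n = min(len(t) for t in trajs)
    rs = [np.interp(np.linspace(0, 1, n),
                    np.linspace(0, 1, len(t)), t) for t in trajs]
    cors = [abs(np.nan_to_num(np.corrcoef(a, b)[0, 1]))
            for i, a in enumerate(rs) for b in rs[i + 1:]]
    return min(cors)

# Hypothetical tidy export of C-MAPSS FD001: one row per (unit, cycle),
# sensor columns prefixed "s_", target column "RUL".
df = pd.read_csv("cmapss_fd001.csv")
features = [c for c in df.columns if c.startswith("s_")]
model = LinearRegression().fit(df[features], df["RUL"])

# shap.Explainer dispatches to an exact linear explainer for this model.
explainer = shap.Explainer(model, df[features])
sv = explainer(df[features]).values  # shape: (n_samples, n_features)

for j, name in enumerate(features):
    trajs = [sv[(df["unit"] == u).to_numpy(), j] for u in df["unit"].unique()]
    print(f"{name}: monotonicity={np.mean([monotonicity(t) for t in trajs]):.3f} "
          f"trendability={trendability(trajs):.3f} "
          f"prognosability={prognosability(trajs):.3f}")
```

Under this reading, the paper's hypothesis amounts to these per-feature scores being high when computed on SHAP trajectories wherever they are high on the raw sensor trajectories; swapping LinearRegression for an MLP or Echo State Network (with a model-agnostic explainer) would probe the complexity effect the abstract reports.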
ISSN: 0004-3702 (print), 1872-7921 (electronic)
DOI: 10.1016/j.artint.2022.103667