Evaluating Tree Explanation Methods for Anomaly Reasoning: A Case Study of SHAP TreeExplainer and TreeInterpreter
Format | Journal Article |
Language | English |
Published | 13.10.2020 |
Summary: | Understanding predictions made by Machine Learning models is critical in many applications. In this work, we investigate the performance of two methods for explaining tree-based models: Tree Interpreter (TI) and SHapley Additive exPlanations TreeExplainer (SHAP-TE). Using a case study on detecting anomalies in job runtimes of applications that utilize cloud-computing platforms, we compare these approaches using a variety of metrics, including computation time, significance of attribution value, and explanation accuracy. We find that, although SHAP-TE offers consistency guarantees over TI at the cost of increased computation, consistency does not necessarily improve the explanation performance in our case study. |
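
The summary compares per-feature attributions produced by the two methods. As a rough illustration only (not code from the paper), the sketch below shows how such attributions are typically obtained with the publicly available `shap` and `treeinterpreter` Python packages; the random-forest model and synthetic data are assumptions standing in for the job-runtime dataset used in the study.

```python
# Minimal sketch, assuming a scikit-learn random forest and synthetic data
# (placeholders, not the cloud job-runtime data from the paper).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
import shap
from treeinterpreter import treeinterpreter as ti

rng = np.random.default_rng(0)
X = rng.random((200, 5))                       # synthetic features
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, 200)    # synthetic target
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# SHAP TreeExplainer: attributions sum to (prediction - expected_value)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)         # shape: (n_samples, n_features)

# TreeInterpreter: prediction = bias + sum of per-feature contributions
prediction, bias, contributions = ti.predict(model, X)

print("SHAP-TE attributions (sample 0):", shap_values[0])
print("TI contributions   (sample 0):", contributions[0])
```

Comparing `shap_values` and `contributions` per sample is one way to reproduce the kind of attribution comparison the abstract describes; the paper additionally measures computation time and explanation accuracy, which this sketch does not cover.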
DOI: | 10.48550/arxiv.2010.06734 |