Evaluating Tree Explanation Methods for Anomaly Reasoning: A Case Study of SHAP TreeExplainer and TreeInterpreter


Bibliographic Details
Published in: arXiv.org
Main Authors: Sharma, Pulkit; Shezan, Rohinton Mirzan; Bhandari, Apurva; Pimpley, Anish; Eswaran, Abhiram; Srinivasan, Soundar; Shao, Liqun
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 13.10.2020

Summary: Understanding predictions made by Machine Learning models is critical in many applications. In this work, we investigate the performance of two methods for explaining tree-based models: Tree Interpreter (TI) and SHapley Additive exPlanations TreeExplainer (SHAP-TE). Using a case study on detecting anomalies in job runtimes of applications that utilize cloud-computing platforms, we compare these approaches using a variety of metrics, including computation time, significance of attribution value, and explanation accuracy. We find that, although SHAP-TE offers consistency guarantees over TI at the cost of increased computation, consistency does not necessarily improve explanation performance in our case study.
ISSN: 2331-8422
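
Both explanation methods named in the summary have open-source Python implementations (the shap and treeinterpreter packages). The sketch below is illustrative only and not taken from the paper: the data, random-forest model, and hyperparameters are placeholder assumptions used to show how per-feature attributions are obtained from each method.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
import shap
from treeinterpreter import treeinterpreter as ti

# Placeholder data standing in for job-runtime features; not the paper's dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# SHAP-TE: Shapley-value attributions computed for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X)

# TI: decomposes each prediction into a bias term plus per-feature contributions.
prediction, bias, contributions = ti.predict(model, X)

# Both yield one attribution per (sample, feature), so the two methods can be
# compared on computation time and on how well attributions isolate a feature.
print(shap_values.shape, contributions.shape)  # (500, 4) (500, 4)

Attributions from the two methods have the same shape, which is what allows the kind of side-by-side comparison of computation time, attribution significance, and explanation accuracy that the paper reports.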