Model-Agnostic Interpretability with Shapley Values
| Published in | 2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA), pp. 1 - 7 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.07.2019 |
Summary: The ability to explain, in understandable terms, why a machine learning model makes a certain prediction is becoming immensely important, as it ensures trust and transparency in the model's decision process. Complex models, such as ensemble or deep learning models, are hard to interpret. Various methods have been proposed to deal with this matter. Shapley values provide accurate explanations, as they assign each feature an importance value for a particular prediction. However, the exponential complexity of their computation is handled efficiently only in decision tree-based models. Another approach is surrogate models, which emulate a black-box model's behavior and provide explanations effortlessly, since they are constructed to be interpretable. Surrogate models are model-agnostic, but they produce only approximate explanations, which cannot always be trusted. We propose a method that combines these two approaches, taking advantage of the model-agnostic nature of surrogate models as well as the explanatory power of Shapley values. We introduce a new metric, Top-j Similarity, which measures the similarity of two given explanations produced by Shapley values, in order to evaluate our work. Finally, we recommend ways in which this method could be improved further.
DOI: 10.1109/IISA.2019.8900669
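To make the combined approach described in the summary concrete, here is a minimal sketch (not the authors' implementation): a tree-based surrogate is trained to mimic a black-box model, the surrogate is explained with tree-based Shapley values via the shap library, and a Top-j score compares two explanations by their top-ranked features. The dataset, model choices, and the exact definition of Top-j Similarity below are illustrative assumptions, not taken from the paper.

```python
# Sketch: surrogate model + Shapley values (assumptions noted in comments).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Black-box model: exact Shapley values are expensive to compute for it directly.
black_box = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_train)

# 2. Surrogate: a tree ensemble trained on the black box's predictions, so the
#    efficient tree-based Shapley computation (TreeSHAP) becomes applicable.
surrogate = GradientBoostingClassifier(random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Shapley-value explanations of the surrogate stand in for explanations
#    of the black box.
tree_explainer = shap.TreeExplainer(surrogate)
surrogate_shap = tree_explainer.shap_values(X_test)

def top_j_similarity(phi_a, phi_b, j=5):
    """Assumed reading of Top-j Similarity: the fraction of features shared by
    the j most important features (by absolute Shapley value) of two explanations."""
    top_a = set(np.argsort(-np.abs(phi_a))[:j])
    top_b = set(np.argsort(-np.abs(phi_b))[:j])
    return len(top_a & top_b) / j

# Example evaluation: compare the surrogate's explanation of one instance with a
# model-agnostic (KernelSHAP) explanation of the black box itself.
kernel_explainer = shap.KernelExplainer(
    lambda d: black_box.predict_proba(d)[:, 1], shap.sample(X_train, 50)
)
reference_shap = kernel_explainer.shap_values(X_test[:1])[0]

print(top_j_similarity(surrogate_shap[0], reference_shap, j=5))
```

Because Top-j Similarity only compares feature rankings, the two explanations can come from explainers on different output scales (log-odds for the tree surrogate, probabilities for the kernel explainer in this sketch) without affecting the score.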