Local Feature Selection without Label or Feature Leakage for Interpretable Machine Learning Predictions


Bibliographic Details
Main Authors: Oosterhuis, Harrie; Lyu, Lijun; Anand, Avishek
Format: Journal Article
Language: English
Published: 16.07.2024

Summary: Local feature selection in machine learning provides instance-specific explanations by focusing on the most relevant features for each prediction, enhancing the interpretability of complex models. However, such methods tend to produce misleading explanations by encoding additional information in their selections. In this work, we attribute the problem of misleading selections to what we formalize as label and feature leakage. We rigorously derive the necessary and sufficient conditions under which no leakage can be guaranteed, and show that existing methods do not meet these conditions. Furthermore, we propose SUWR, the first local feature selection method that is proven to have no leakage. Our experimental results indicate that SUWR is less prone to overfitting and combines state-of-the-art predictive performance with high feature-selection sparsity. Our generic and easily extendable formal approach provides a strong theoretical basis for future work on interpretability with reliable explanations.
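The kind of label leakage the summary describes can be illustrated with a deliberately contrived toy sketch (not the paper's formal definition or its experiments): a local selector whose selection mask depends on the label allows a downstream reader to recover the label from the mask alone, even when the features themselves are pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two pure-noise features and a random binary label
# (a hypothetical setup for illustration only).
n = 1000
X = rng.normal(size=(n, 2))
y = rng.integers(0, 2, size=n)

# A "leaky" local selector: the selection mask itself depends on the
# label, so the mask encodes y even though the features carry no signal.
mask = np.zeros_like(X, dtype=bool)
mask[y == 1, 0] = True  # select feature 0 when the label is 1
mask[y == 0, 1] = True  # select feature 1 when the label is 0

# A predictor that only looks at WHICH feature was selected, ignoring
# all feature values, recovers the label perfectly: label leakage.
y_hat = mask[:, 0].astype(int)
accuracy = (y_hat == y).mean()
print(accuracy)  # perfect accuracy despite uninformative features
```

The explanation ("feature 0 mattered here") looks instance-specific, but its predictive power comes entirely from the selection pattern, which is the misleading behavior the leakage-free conditions are designed to rule out.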
DOI: 10.48550/arxiv.2407.11778