Supporting AI-Explainability by Analyzing Feature Subsets in a Machine Learning Model

Bibliographic Details
Published in: Studies in Health Technology and Informatics, Vol. 294, p. 109
Main Authors: Plagwitz, Lucas; Brenner, Alexander; Fujarski, Michael; Varghese, Julian
Format: Journal Article
Language: English
Published: Netherlands, 25.05.2022
Summary: Machine learning algorithms are becoming increasingly prevalent in the field of medicine, as they offer the ability to recognize patterns in complex medical data. Especially in this sensitive area, the active use of largely black-box models is a controversial topic. We aim to highlight how an aggregated and systematic feature analysis of such models can be beneficial in the medical context. For this reason, we introduce a grouped version of the permutation importance analysis for evaluating the influence of entire feature subsets in a machine learning model. In this way, expert-defined subgroups can be evaluated in the decision-making process. Based on these results, new hypotheses can be formulated and examined.
ISSN: 1879-8365
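
The summary describes a grouped variant of permutation importance, in which all features of an expert-defined subset are permuted jointly and the resulting drop in model performance is attributed to the subset as a whole. The sketch below illustrates that idea; it assumes a fitted scikit-learn classifier, a held-out test set as a pandas DataFrame, and groups given as lists of column names. The function name, the accuracy metric, and the group dictionary format are illustrative assumptions, not the authors' implementation.

# Minimal sketch of grouped permutation importance (assumptions: fitted
# scikit-learn estimator `model`, held-out DataFrame X with labels y,
# expert-defined groups as {group_name: [column names]}).
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score

def grouped_permutation_importance(model, X, y, groups, n_repeats=30, random_state=0):
    """Estimate the influence of each feature subset by permuting all of its
    columns jointly and measuring the drop in a reference score."""
    rng = np.random.default_rng(random_state)
    baseline = accuracy_score(y, model.predict(X))
    importances = {}
    for name, cols in groups.items():
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # One shared row permutation for the whole subset: this keeps the
            # correlation structure within the group intact while breaking its
            # link to the target.
            idx = rng.permutation(len(X_perm))
            X_perm[cols] = X_perm[cols].iloc[idx].to_numpy()
            drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
        # Mean score drop over repeats is read as the group's importance.
        importances[name] = float(np.mean(drops))
    return importances

For example, with hypothetical clinical subgroups such as {"vital_signs": ["heart_rate", "blood_pressure"], "lab_values": ["creatinine", "crp"]}, the returned dictionary ranks each expert-defined subgroup by its mean score drop, which can then serve as a starting point for formulating new hypotheses.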