xEEGNet: towards explainable AI in EEG dementia classification
Published in: Journal of Neural Engineering, Vol. 22, No. 4, pp. 46042–46060
Format: Journal Article
Language: English
Published: England: IOP Publishing, 01.08.2025
Summary: Objective. This work presents xEEGNet, a novel, compact, and explainable neural network for electroencephalography (EEG) data analysis. It is fully interpretable and reduces overfitting through a major reduction in parameters. Approach. As an applicative use case for developing the model, we focused on the classification of common dementia conditions, Alzheimer's disease and frontotemporal dementia, versus controls; xEEGNet, however, is broadly applicable to other neurological conditions involving spectral alterations. We used ShallowNet, a simple and popular model in the EEGNet family, as a starting point. Its structure was analyzed and gradually modified to move from a 'black box' model to a more transparent one, without compromising performance. The learned kernels and weights were analyzed from a clinical standpoint to assess their medical significance. Model variants, including ShallowNet and the final xEEGNet, were evaluated using a robust nested leave-n-subjects-out cross-validation for unbiased performance estimates. Variability across data splits was explained using the embedded EEG representations, grouped by class and set, with pairwise separability to quantify group distinction. Overfitting was measured through training-validation loss correlation and training speed. Main results. xEEGNet uses only 168 parameters, 200 times fewer than ShallowNet, yet retains interpretability, resists overfitting, achieves comparable median performance (−1.5%), and reduces performance variability across splits. This variability is explained by the embedded EEG representations: higher accuracy correlates with greater separation between test-set controls and Alzheimer's cases, without significant influence from the training data. Significance. The capability of xEEGNet to filter specific EEG bands, learn band-specific topographies, and use the relevant EEG spectral bands for disease classification demonstrates its interpretability. While large deep-learning models are typically prioritized for performance, this study shows that smaller architectures such as xEEGNet can be equally effective for pathology classification from EEG data.
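The architectural ingredients the abstract names (learnable band-pass kernels, band-specific topographies, spectral-power features) follow the ShallowNet pipeline. Below is a minimal PyTorch sketch of such a compact model; the filter counts, kernel lengths, and channel count are illustrative assumptions and do not reproduce the published 168-parameter xEEGNet configuration.

```python
import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    """ShallowNet-style pipeline: temporal filters -> spatial topographies
    -> log band power -> linear classifier. Sizes are illustrative only."""
    def __init__(self, n_channels=19, n_classes=3, n_filters=4, kernel_len=125):
        super().__init__()
        # Each temporal kernel acts as a learnable band-pass (FIR) filter.
        self.temporal = nn.Conv2d(1, n_filters, (1, kernel_len), bias=False)
        # One spatial kernel per temporal filter: a learnable scalp topography.
        self.spatial = nn.Conv2d(n_filters, n_filters, (n_channels, 1),
                                 groups=n_filters, bias=False)
        # Square -> average pool -> log approximates log spectral power.
        self.pool = nn.AvgPool2d((1, 75), stride=(1, 15))
        self.classifier = nn.Linear(n_filters, n_classes)

    def forward(self, x):                      # x: (batch, 1, channels, time)
        x = self.spatial(self.temporal(x))     # (batch, filters, 1, time')
        x = torch.log(torch.clamp(self.pool(x ** 2), min=1e-6))
        x = x.mean(dim=-1).flatten(1)          # average log-power per filter
        return self.classifier(x)

model = CompactEEGNet()
print(sum(p.numel() for p in model.parameters()))   # 591 with these sizes
out = model(torch.randn(2, 1, 19, 1000))            # -> shape (2, 3)
```

With this structure, interpretability follows from inspecting the learned objects directly: the frequency response of each temporal kernel shows which EEG band it passes, and each spatial kernel is a scalp map that can be read as a topography.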
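The nested leave-n-subjects-out evaluation mentioned in the abstract can be approximated with grouped splitters, so that all epochs from a subject land in exactly one of train, validation, or test. The sketch below uses scikit-learn's GroupShuffleSplit on dummy data; the split sizes, subject counts, and epoch counts are assumptions, not the paper's exact protocol.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 19, 1000))   # epochs x channels x samples (dummy)
subjects = np.repeat(np.arange(30), 10)    # 10 epochs per subject (assumed)
y = rng.integers(0, 3, size=300)           # 3 classes: control / AD / FTD

outer = GroupShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
for train_val_idx, test_idx in outer.split(X, y, groups=subjects):
    inner = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    tr, va = next(inner.split(X[train_val_idx], y[train_val_idx],
                              groups=subjects[train_val_idx]))
    # Map inner indices back to positions in the original arrays.
    train_idx, val_idx = train_val_idx[tr], train_val_idx[va]
    # Subject-wise splitting prevents leakage, so accuracy estimates are
    # not inflated by seeing other epochs of a test subject during training.
    assert not (set(subjects[train_idx]) & set(subjects[test_idx]))
```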
Bibliography: JNE-109294
ISSN: 1741-2560, 1741-2552
DOI: 10.1088/1741-2552/adf6e6