Monte Carlo-based Strategy for Assessing the Impact of EEG Data Uncertainty on Confidence in Convolutional Neural Network Classification

Bibliographic Details
Published in: IEEE Access, Vol. 13, p. 1
Main Authors: Nzakuna, Pierre Sedi; Gallo, Vincenzo; Paciello, Vincenzo; Lay-Ekuakille, Aime; Lusala, Angelo Kuti
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2025

Summary: The electroencephalography (EEG) data acquisition process in Brain-Computer Interfaces (BCIs) is inevitably affected by uncertainty, which introduces variability into the data. This variability, often overlooked, affects the training and testing of neural network (NN) models. This study evaluates the impact of systematic bias (±2%) combined with aleatoric uncertainty (2-5% random Gaussian perturbations) on the classification confidence of the EEGNet model for four-class Motor Imagery (MI) tasks using the BCI Competition IV 2a dataset. Through two Monte Carlo simulations with 100 iterations each, perturbed datasets were generated to mimic real-world EEG acquisition uncertainties. Softmax outputs served to analyze overlap in predicted probabilities and to quantify model confidence in classification decisions. We introduce robust evaluation metrics, including the proportion of the area under the curve (AUC) of probability density functions (PDFs) at ≥ 70% accuracy, overlap coefficients, and percentile-based thresholding, which provide a more comprehensive assessment of model performance, capturing not only accuracy but also confidence and ambiguity in predictions. Results show that the robustness of EEGNet in the face of realistic measurement uncertainties is prone to inter-subject variability, with the model achieving higher confidence for Subject 1 (average 90.52%) than for Subject 2 (62.96%). EEGNet demonstrates resilience to directional calibration shifts in the data, with model confidence varying by 0.22% for Subject 1 and by 6.24% for Subject 2, showing that aleatoric precision errors dominate over small systematic shifts. Our approach provides a rigorous framework for quantifying the impact of measurement variability on EEG-based BCI classification, thereby enhancing reliability and generalizability in practical BCI deployments.
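The perturbation scheme described in the summary can be sketched in a few lines of NumPy. The snippet below is an illustrative toy, not the authors' code: it applies a systematic gain bias plus trial-wise Gaussian noise to synthetic EEG data, runs 100 Monte Carlo iterations, and records the softmax confidence of a stand-in linear "model" (the channel count, perturbation values, and readout weights `W` are assumptions chosen to mirror the ±2% / 2-5% figures and the 22-channel, four-class BCI Competition IV 2a setup).

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(eeg, bias=0.02, noise_lo=0.02, noise_hi=0.05):
    """Systematic gain bias plus aleatoric Gaussian noise (illustrative values)."""
    sigma = rng.uniform(noise_lo, noise_hi)  # 2-5% random perturbation level per trial
    noise = rng.normal(0.0, sigma * np.abs(eeg).mean(), eeg.shape)
    return eeg * (1.0 + bias) + noise

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical stand-in for EEGNet: a fixed linear readout over channel means.
W = rng.normal(size=(22, 4))  # 22 EEG channels -> 4 MI classes

def model_confidence(trial):
    p = softmax(trial.mean(axis=-1) @ W)  # softmax class probabilities
    return p.max()                        # confidence = top-class probability

trial = rng.normal(size=(22, 1000))       # one synthetic trial: 22 channels x 1000 samples
confidences = [model_confidence(perturb(trial)) for _ in range(100)]  # 100 MC iterations
print(f"mean confidence: {np.mean(confidences):.3f} +/- {np.std(confidences):.3f}")
```

The spread of `confidences` across iterations is the quantity the paper's overlap and PDF-based metrics summarize; a real replication would substitute a trained EEGNet forward pass for the linear readout.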
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2025.3570134