Subject-Independent Classification of Motor Imagery Tasks in EEG Using Multisubject Ensemble CNN
Published in | IEEE Access, Vol. 10, pp. 81355-81363 |
---|---|
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022 |
Summary: | Subject-independent (SI) classification is a major area of investigation in Brain-Computer Interface (BCI) research that aims to construct classifiers of users' mental states from electroencephalogram (EEG) data collected from independent subjects. Significant inter-subject variability in the EEG is among the most challenging issues in designing SI BCI systems. In this work, we propose and examine the utility of a Multi-Subject Ensemble Convolutional Neural Network (MS-En-CNN) for SI classification of motor imagery (MI) tasks. The base classifiers used in MS-En-CNN share a fixed CNN architecture (referred to as DeepConvNet) and are trained using data collected from multiple subjects during the training process. In this regard, the training subjects are divided into N folds, and N base DeepConvNets are trained, each on the data from N-1 folds, whereas hyperparameter optimization is performed using the held-out fold. We evaluate the performance of the MS-En-CNN on a large open-access MI dataset from the literature, which includes 54 participants and a total of 21,600 trials. The results show that the MS-En-CNN achieves the highest single-trial SI classification performance reported on this dataset. In particular, we obtained SI classification performances with average and median accuracies of 85.42% and 86.50% (±10.16%), respectively. This result exhibits a statistically significant improvement ([Formula Omitted]) over the best previously reported result, which had an average and a median accuracy of 84.19% and 84.50% (±10.08%), respectively. |
---|---|
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2022.3195513 |
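
The subject-fold ensemble procedure described in the summary can be sketched at a high level. The snippet below is an illustrative sketch only, not the authors' implementation: it uses synthetic data in place of the MI dataset, a scikit-learn MLPClassifier as a stand-in for the DeepConvNet base networks, and a soft-voting (averaged-probability) combination of the N base models, which is an assumption about how the ensemble prediction is formed.

```python
# Illustrative sketch of a multi-subject ensemble with subject-wise folds.
# Assumptions: synthetic data, MLPClassifier stand-in for DeepConvNet,
# and soft voting across the N base models.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-in for the training set: 54 subjects, each contributing a few
# trials of flattened EEG features with binary MI labels.
n_subjects, trials_per_subject, n_features = 54, 20, 64
X = {s: rng.normal(size=(trials_per_subject, n_features)) for s in range(n_subjects)}
y = {s: rng.integers(0, 2, size=trials_per_subject) for s in range(n_subjects)}

def stack(ids):
    """Concatenate trials and labels from the given subject ids."""
    return (np.concatenate([X[s] for s in ids]),
            np.concatenate([y[s] for s in ids]))

# Split the *training subjects* (not individual trials) into N folds.
# Train one base classifier on the subjects of N-1 folds and keep the
# held-out fold for validation / hyperparameter selection.
N = 5
subject_ids = np.arange(n_subjects)
base_models = []
for train_idx, heldout_idx in KFold(n_splits=N, shuffle=True, random_state=0).split(subject_ids):
    X_tr, y_tr = stack(subject_ids[train_idx])
    X_val, y_val = stack(subject_ids[heldout_idx])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"held-out fold accuracy: {clf.score(X_val, y_val):.3f}")
    base_models.append(clf)

# Subject-independent prediction for an unseen subject: average the class
# probabilities of the N base models and take the argmax per trial.
X_new = rng.normal(size=(trials_per_subject, n_features))
probs = np.mean([m.predict_proba(X_new) for m in base_models], axis=0)
ensemble_pred = probs.argmax(axis=1)
print("ensemble predictions:", ensemble_pred)
```

Splitting by subject rather than by trial is the key point of the sketch: every base model is validated on subjects it never saw, which mirrors the subject-independent evaluation setting described in the summary.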