Independent Vector Analysis for Feature Extraction in Motor Imagery Classification

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 24, No. 16, p. 5428
Main Authors: Moraes, Caroline Pires Alavez; dos Santos, Lucas Heck; Fantinato, Denis Gustavo; Neves, Aline; Adali, Tülay
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 22.08.2024

Summary: Independent vector analysis (IVA) can be viewed as an extension of independent component analysis (ICA) to multiple datasets. It exploits the statistical dependency between different datasets through mutual information. In the context of motor imagery (MI) classification based on electroencephalogram (EEG) signals for the brain–computer interface (BCI), several methods have been proposed to extract features efficiently, mainly based on common spatial patterns, filter banks, and deep learning. However, most methods use only one dataset at a time, which may not be sufficient for dealing with a multi-source retrieval problem in certain scenarios. From this perspective, this paper proposes an original approach for feature extraction across multiple datasets based on IVA to improve the classification of EEG-based motor imagery movements. The IVA components were used as features to classify imagined movements using consolidated classifiers (support vector machines and K-nearest neighbors) and deep classifiers (EEGNet and EEGInception). The results show promising performance concerning the clustering of MI-based BCI patients, and the proposed method reached an average accuracy of 86.7%.
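The pipeline the abstract describes — decompose the signals into components, then feed the component activations to a conventional classifier such as K-nearest neighbors — can be sketched in miniature. The sketch below is an illustration only, not the authors' method: it uses plain PCA as a stand-in for the IVA decomposition (IVA jointly decomposes several datasets, which a few lines of NumPy cannot reproduce), synthetic Gaussian data in place of EEG trials, and a minimal K-nearest-neighbor vote.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for two motor-imagery classes: each "trial" is a vector of
# band-power-like features whose mean depends on the class.
n_per_class, n_feat = 40, 8
X0 = rng.normal(0.0, 1.0, (n_per_class, n_feat))
X1 = rng.normal(1.5, 1.0, (n_per_class, n_feat))
X = np.vstack([X0, X1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# "Component" extraction: project trials onto the top principal components.
# Plain PCA is used here only as a stand-in for the IVA decomposition
# described in the abstract.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
eigval, eigvec = np.linalg.eigh(cov)      # eigenvalues in ascending order
components = eigvec[:, -2:]               # two strongest directions
Z = Xc @ components                       # component activations as features

def knn_predict(train, labels, test, k=3):
    """Minimal K-nearest-neighbor majority vote for binary labels."""
    d = np.linalg.norm(train[None, :, :] - test[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return (labels[idx].mean(axis=1) > 0.5).astype(int)

# Train on even-indexed trials, test on odd-indexed trials.
pred = knn_predict(Z[::2], y[::2], Z[1::2])
acc = (pred == y[1::2]).mean()
print(f"toy accuracy: {acc:.2f}")
```

On this synthetic data the class-mean difference dominates the top principal component, so the two-dimensional component features separate the classes well; on real EEG the paper's point is precisely that a joint multi-dataset decomposition (IVA) extracts better features than such single-dataset projections.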
ISSN: 1424-8220
DOI: 10.3390/s24165428