Subject-Independent Brain-Computer Interface for Decoding High-Level Visual Imagery Tasks

Bibliographic Details
Published in: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 3396-3401
Main Authors: Lee, Dae-Hyeok; Han, Dong-Kyun; Kim, Sung-Jin; Jeong, Ji-Hoon; Lee, Seong-Whan
Format: Conference Proceeding
Language: English
Published: IEEE, 17.10.2021
Summary: Brain-computer interface (BCI) enables communication between humans and devices by recognizing a user's status and intention. Communicating with a drone through electroencephalogram (EEG) signals is one of the most challenging problems in the BCI domain. In particular, controlling a drone swarm (its direction and formation) offers more advantages than controlling a single drone. In the visual imagery (VI) paradigm, subjects visually imagine specific objects or scenes. Reducing the variability among subjects' EEG signals is essential for practical BCI-based systems. In this study, we proposed the subepoch-wise feature encoder (SEFE) to improve performance on subject-independent tasks using the VI dataset. This study is the first attempt to demonstrate the possibility of generalization across subjects in VI-based BCI. We used leave-one-subject-out cross-validation to evaluate performance. Including the proposed module yielded higher performance than excluding it, and DeepConvNet with SEFE achieved the highest performance of 0.72 among six different decoding models. Hence, we demonstrated the feasibility of decoding the VI dataset in the subject-independent task with robust performance using our proposed module.
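
The abstract's evaluation protocol, leave-one-subject-out (LOSO) cross-validation, can be illustrated with a minimal sketch. The code below is an assumption-laden example, not the authors' pipeline: it substitutes a plain logistic-regression classifier for the paper's DeepConvNet with SEFE, uses synthetic EEG-shaped data, and invents the array shapes and class count purely for illustration.

# Minimal LOSO cross-validation sketch for subject-independent EEG decoding.
# Assumed data layout: X has shape (n_trials, n_channels, n_timepoints),
# y holds per-trial imagery labels, subjects holds per-trial subject IDs.
# The classifier is a stand-in; the paper's DeepConvNet + SEFE is not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def loso_accuracy(X, y, subjects):
    """Train on all subjects but one, test on the held-out subject, repeat."""
    X_flat = X.reshape(X.shape[0], -1)  # flatten channels x time per trial
    logo = LeaveOneGroupOut()
    scores = []
    for train_idx, test_idx in logo.split(X_flat, y, groups=subjects):
        clf = make_pipeline(StandardScaler(),
                            LogisticRegression(max_iter=1000))
        clf.fit(X_flat[train_idx], y[train_idx])
        scores.append(clf.score(X_flat[test_idx], y[test_idx]))
    return float(np.mean(scores)), scores

# Synthetic example: 5 subjects x 40 trials, 32 channels, 128 samples (hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32, 128))
y = rng.integers(0, 4, size=200)        # four imagery classes, illustrative only
subjects = np.repeat(np.arange(5), 40)
mean_acc, per_subject = loso_accuracy(X, y, subjects)
print(f"mean LOSO accuracy: {mean_acc:.2f}")

Each fold holds out every trial from one subject, so the reported score reflects generalization to an unseen subject rather than within-subject fitting, which is the sense in which the paper's 0.72 result is subject-independent.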
ISSN: 2577-1655
DOI: 10.1109/SMC52423.2021.9659151