Multi-Modal Multi-Task Neural Network for Motor Imagery Classification with EEG and fNIRS signals
| Published in | IEEE Sensors Journal, p. 1 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | IEEE, 2022 |
Summary: Brain-computer interfaces (BCIs) based on motor imagery (MI) can control external applications by decoding brain physiological signals such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Traditional unimodal MI decoding methods cannot achieve satisfactory classification performance because of the limited representational ability of EEG or fNIRS signals alone; different brain signals are typically complementary, with differing sensitivity to different MI patterns. To improve the recognition rate and generalization ability of MI decoding, we propose a novel end-to-end multi-modal multi-task neural network (M2NN) model that fuses EEG and fNIRS signals. M2NN integrates a spatial-temporal feature extraction module, a multimodal feature fusion module, and a multi-task learning (MTL) module. The MTL module comprises two learning tasks: a main classification task for motor imagery and an auxiliary deep metric learning task. The approach was evaluated on a public multi-modal dataset; experimental results show that M2NN improved classification accuracy by 8.92%, 6.97%, and 8.62% over the unimodal multi-task methods MEEG and MHbR and the multi-modal single-task method MDNN, respectively. The classification accuracies of the multi-task methods MEEG, MHbR, and M2NN exceeded those of the single-task methods EEG, HbR, and MDNN by 4.8%, 4.37%, and 8.62%, respectively. M2NN achieved the best classification performance of the six methods, with an average accuracy across 29 subjects of 82.11% ± 7.25%. These results verify the effectiveness of multimodal fusion and multi-task learning and show that M2NN outperforms the baseline and state-of-the-art methods.
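The abstract only names the three building blocks (per-modality spatial-temporal feature extraction, multimodal fusion, and multi-task learning with a main classifier plus an auxiliary deep metric learning task) without architectural details. The following is a minimal PyTorch-style sketch of that structure; all module names, layer shapes, kernel sizes, the triplet formulation of the auxiliary loss, and the loss weight are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTemporalBlock(nn.Module):
    """Hypothetical spatial-temporal feature extractor, one instance per modality."""
    def __init__(self, in_channels, feat_dim=64):
        super().__init__()
        # Temporal convolution followed by a spatial (across-channel) convolution,
        # a common pattern for EEG/fNIRS encoders; kernel sizes are placeholders.
        self.temporal = nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12))
        self.spatial = nn.Conv2d(16, 32, kernel_size=(in_channels, 1))
        self.pool = nn.AdaptiveAvgPool2d((1, 8))
        self.proj = nn.Linear(32 * 8, feat_dim)

    def forward(self, x):          # x: (batch, 1, channels, time)
        h = F.elu(self.temporal(x))
        h = F.elu(self.spatial(h))
        h = self.pool(h).flatten(1)
        return self.proj(h)        # (batch, feat_dim)

class M2NNSketch(nn.Module):
    """Rough M2NN-style model: per-modality encoders, feature fusion,
    a main MI classification head, and an embedding head for the auxiliary metric task."""
    def __init__(self, eeg_channels, fnirs_channels, n_classes=2, feat_dim=64):
        super().__init__()
        self.eeg_enc = SpatialTemporalBlock(eeg_channels, feat_dim)
        self.fnirs_enc = SpatialTemporalBlock(fnirs_channels, feat_dim)
        self.fusion = nn.Linear(2 * feat_dim, feat_dim)   # simple concat + projection
        self.classifier = nn.Linear(feat_dim, n_classes)  # main MI classification head
        self.embedding = nn.Linear(feat_dim, 32)          # auxiliary metric-learning head

    def forward(self, eeg, fnirs):
        fused = F.elu(self.fusion(torch.cat([self.eeg_enc(eeg),
                                             self.fnirs_enc(fnirs)], dim=1)))
        return self.classifier(fused), F.normalize(self.embedding(fused), dim=1)

# Joint objective: cross-entropy for the main task plus a triplet loss as a stand-in
# for the unspecified deep-metric auxiliary task; the 0.5 weight is arbitrary.
def m2nn_loss(logits, emb, labels, margin=1.0, aux_weight=0.5):
    ce = F.cross_entropy(logits, labels)
    # Naive in-batch hardest-positive / hardest-negative mining, purely for illustration.
    dist = torch.cdist(emb, emb)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = (dist * same).max(dim=1).values
    neg = (dist + same.float() * 1e6).min(dim=1).values
    triplet = F.relu(pos - neg + margin).mean()
    return ce + aux_weight * triplet
```

In this sketch the two heads share the fused representation, so the metric-learning task regularizes the features used by the MI classifier, which is the general idea conveyed by the abstract; how the paper actually fuses modalities and weights the two tasks is not specified here.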
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2022.3205956