Multimodal Multitask Neural Network for Motor Imagery Classification With EEG and fNIRS Signals


Bibliographic Details
Published in: IEEE Sensors Journal, Vol. 22, No. 21, pp. 20695–20706
Main Authors: He, Qun; Feng, Lufeng; Jiang, Guoqian; Xie, Ping
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2022

Summary: Brain–computer interfaces (BCIs) based on motor imagery (MI) can control external applications by decoding brain physiological signals such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Traditional unimodal MI decoding methods cannot achieve satisfactory classification performance because of the limited representational ability of EEG or fNIRS signals alone. Different brain signals are usually complementary, with different sensitivities to different MI patterns. To improve the recognition rate and generalization ability of MI decoding, we propose a novel end-to-end multimodal multitask neural network (M2NN) model that fuses EEG and fNIRS signals. M2NN integrates a spatial–temporal feature extraction module, a multimodal feature fusion module, and a multitask learning (MTL) module. Specifically, the MTL module comprises two learning tasks: a main classification task for MI and an auxiliary task based on deep metric learning. The approach was evaluated on a public multimodal dataset, and experimental results show that M2NN improved classification accuracy by 8.92%, 6.97%, and 8.62% over the multitask unimodal EEG model (MEEG), the multitask unimodal HbR model (MHbR), and the multimodal single-task model (MDNN), respectively. The multitask methods MEEG, MHbR, and M2NN improved classification accuracy by 4.8%, 4.37%, and 8.62% over their single-task counterparts EEG, HbR, and MDNN, respectively. M2NN achieved the best classification performance of the six methods, with an average accuracy across 29 subjects of 82.11% ± 7.25%. These results verify the effectiveness of multimodal fusion and MTL, and M2NN outperforms both baseline and state-of-the-art (SOTA) methods.
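The abstract describes a multitask objective that combines a main MI-classification loss with an auxiliary deep-metric-learning loss over fused EEG/fNIRS features. A minimal NumPy sketch of one such weighted objective follows; it assumes concatenation-based feature fusion, a cross-entropy main loss, a contrastive auxiliary loss, and a trade-off weight `lam` — all hypothetical choices, since the record does not specify the paper's exact fusion scheme or metric loss.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Main-task loss: cross-entropy over MI class logits.
    z = logits - logits.max()                 # stabilize the exponentials
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    # Auxiliary deep-metric loss (one common choice): pull same-class
    # embeddings together, push different-class ones at least `margin` apart.
    d = np.linalg.norm(emb_a - emb_b)
    if same_class:
        return 0.5 * d ** 2
    return 0.5 * max(margin - d, 0.0) ** 2

def multitask_loss(logits, label, emb_a, emb_b, same_class, lam=0.5):
    # Weighted sum of the main classification task and the auxiliary
    # metric-learning task; `lam` is a hypothetical trade-off weight.
    return softmax_cross_entropy(logits, label) + \
        lam * contrastive_loss(emb_a, emb_b, same_class)

# Naive feature-level fusion of per-modality feature vectors by concatenation
# (an assumption; the paper's fusion module may differ).
eeg_feat = np.array([0.2, -0.1, 0.4])
fnirs_feat = np.array([0.05, 0.3])
fused = np.concatenate([eeg_feat, fnirs_feat])
```

In a trained network, `logits` would come from the classification head and the embeddings from the metric-learning head, both fed by the fused representation; here the pieces are shown standalone so the loss arithmetic is easy to check.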
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2022.3205956