Classification Algorithm for fNIRS-based Brain Signals Using Convolutional Neural Network with Spatiotemporal Feature Extraction Mechanism

Bibliographic Details
Published in: Neuroscience, Vol. 542, pp. 59-68
Main Authors: Qin, Yuxin; Li, Baojiang; Wang, Wenlong; Shi, Xingbin; Peng, Cheng; Lu, Yifan
Format: Journal Article
Language: English
Published: United States, Elsevier Inc., 26.03.2024

Summary:

Graphical abstract: This graphic shows the structure of our network. In the preprocessing stage, we use the Beer-Lambert law to convert the optical signals into hemodynamic HbO and HbR. We use an end-to-end structure with little preprocessing of the raw fNIRS signal. The input signal has C = 24 channels and T = 351 samples. The original MI and MA signals are first passed through a convolution block consisting of a 2D temporal convolution, a depthwise convolution, and a separable convolution, each followed by a batch normalization layer, an ELU activation function, an average pooling layer, and a dropout layer. Afterwards, spatio-temporal feature extraction is performed by a spatial attention mechanism and a temporal convolutional network, which also helps reduce overfitting. Finally, the fNIRS signal is classified as MI or MA. With only 3.23 K training parameters, the method achieves an accuracy of 85.63% (HbO) and 86.21% (HbR) in the MI task and 96.84% (HbO) and 94.83% (HbR) in the MA task.

Highlights:
•fNIRS decoding performance improvement.
•Using convolutional neural networks for fNIRS classification.
•Spatial attention mechanisms can capture remote contextual information.
•Temporal convolutional networks outperform most RNNs in time-series tasks.

Abstract: Brain-Computer Interface (BCI) is a highly promising human–computer interaction method that uses brain signals to control external devices. BCI based on functional near-infrared spectroscopy (fNIRS) is considered a relatively new and promising paradigm. fNIRS is a technique for measuring functional changes in cerebral hemodynamics: it detects changes in the hemodynamic activity of the cerebral cortex by measuring oxyhemoglobin (HbO) and deoxyhemoglobin (HbR) concentrations, and from these infers the underlying neural activity of the brain. At present, deep learning (DL) methods have not been widely used in fNIRS decoding, and few studies consider both the spatial and temporal dimensions for fNIRS classification. To address these problems, we propose an end-to-end hybrid neural network for fNIRS feature extraction. The method uses a spatial–temporal convolutional layer to automatically extract temporally valid information and a spatial attention mechanism to extract spatially localized information. A temporal convolutional network (TCN) is used to further exploit the temporal information of fNIRS before the fully connected layer. We validated our approach on a publicly available dataset of 29 subjects performing left-hand and right-hand motor imagery (MI), mental arithmetic (MA), and a baseline task. The results show that the method achieves high accuracy with few training parameters, providing a meaningful reference for BCI development.
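
For reference, the optical-to-hemodynamic conversion mentioned above is conventionally performed with the modified Beer-Lambert law; the expression below is the standard textbook form (d is the source-detector separation, DPF the differential pathlength factor), not necessarily the exact variant used by the authors:

\Delta OD(\lambda) = \big( \varepsilon_{\mathrm{HbO}}(\lambda)\,\Delta[\mathrm{HbO}] + \varepsilon_{\mathrm{HbR}}(\lambda)\,\Delta[\mathrm{HbR}] \big)\, d \cdot \mathrm{DPF}(\lambda)

Measuring \Delta OD at two wavelengths yields a 2x2 linear system that is solved for \Delta[\mathrm{HbO}] and \Delta[\mathrm{HbR}].

The sketch below is a minimal, hypothetical PyTorch rendering of the pipeline described in the summary: a convolution block (temporal, depthwise, and separable convolutions, each followed by batch normalization, ELU, average pooling, and dropout), a simplified attention stage, a single dilated convolution standing in for the TCN, and a fully connected classifier. All layer widths, kernel lengths, pooling factors, and the attention/TCN formulations are assumptions made for illustration; they are not the authors' published hyperparameters, and the parameter count will not match the reported 3.23 K.

import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    # Temporal conv -> depthwise (spatial) conv -> separable conv, each
    # followed by BatchNorm, ELU, average pooling and dropout, mirroring
    # the convolution block described in the summary (all sizes are guesses).
    def __init__(self, n_channels=24, f1=8, depth_mult=2, dropout=0.25):
        super().__init__()
        f2 = f1 * depth_mult
        self.temporal = nn.Sequential(
            nn.Conv2d(1, f1, (1, 32), padding=(0, 16), bias=False),
            nn.BatchNorm2d(f1), nn.ELU(),
            nn.AvgPool2d((1, 2)), nn.Dropout(dropout),
        )
        self.depthwise = nn.Sequential(
            nn.Conv2d(f1, f2, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f2), nn.ELU(),
            nn.AvgPool2d((1, 4)), nn.Dropout(dropout),
        )
        self.separable = nn.Sequential(
            nn.Conv2d(f2, f2, (1, 16), padding=(0, 8), groups=f2, bias=False),
            nn.Conv2d(f2, f2, 1, bias=False),
            nn.BatchNorm2d(f2), nn.ELU(),
            nn.AvgPool2d((1, 4)), nn.Dropout(dropout),
        )

    def forward(self, x):                  # x: (batch, 1, C, T)
        return self.separable(self.depthwise(self.temporal(x)))


class FeatureAttention(nn.Module):
    # Simplified stand-in for the paper's spatial attention: learns one
    # sigmoid gate per feature map from its global average over time.
    def __init__(self, n_features):
        super().__init__()
        self.score = nn.Conv1d(n_features, n_features, 1)

    def forward(self, x):                  # x: (batch, features, time)
        gate = torch.sigmoid(self.score(x.mean(dim=-1, keepdim=True)))
        return x * gate


class FNIRSNet(nn.Module):
    # End-to-end classifier: conv block -> attention -> TCN-like dilated
    # convolution -> fully connected output layer.
    def __init__(self, n_channels=24, n_classes=2, f2=16):
        super().__init__()
        self.conv = ConvBlock(n_channels)
        self.attn = FeatureAttention(f2)
        self.tcn = nn.Sequential(          # one dilated conv as a TCN stand-in
            nn.Conv1d(f2, f2, 3, padding=2, dilation=2), nn.ELU(),
        )
        self.fc = nn.LazyLinear(n_classes)

    def forward(self, x):                  # x: (batch, 1, 24, 351)
        h = self.conv(x).squeeze(2)        # -> (batch, 16, T')
        h = self.tcn(self.attn(h))
        return self.fc(h.flatten(1))       # -> (batch, n_classes)


if __name__ == "__main__":
    model = FNIRSNet()
    hbo = torch.randn(4, 1, 24, 351)       # toy batch: 24 channels, 351 samples
    print(model(hbo).shape)                # torch.Size([4, 2])

Running the script on a random batch shaped (4, 1, 24, 351), i.e. four epochs of already-converted HbO data, prints a logits tensor of shape (4, 2): one score per class for the two-class (MI vs. MA) setting described above.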
ISSN: 0306-4522, 1873-7544
DOI: 10.1016/j.neuroscience.2024.02.011