Cross-subject and cross-experimental classification of mental fatigue based on two-stream self-attention network
| Published in | Biomedical signal processing and control, Vol. 88, p. 105638 |
| --- | --- |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Ltd, 01.02.2024 |
Summary: Mental fatigue detection based on electroencephalogram (EEG) signals is an objective and effective detection method. However, individual variability and differences among mental fatigue experimental paradigms limit the generalizability of classification models across subjects and experiments. This paper proposes a Spatio-Temporal Transformer (STTransformer) architecture based on a two-stream attention network. Using datasets from three different mental fatigue tasks and sets of individuals, STTransformer performs cross-task and cross-subject transfer learning with promising results. The architecture follows a model-transfer approach: a deep neural network is pre-trained in the source domain to acquire prior knowledge, then part of its parameters are frozen and the network is fine-tuned on the target domain, which contains similar samples. Multiple attention mechanisms capture features shared across individuals and experimental paradigms, yielding good transfer performance across multiple subjects and two mental fatigue experiments. The attention mechanism is also used to visualize part of the feature maps, revealing two characteristics of mental fatigue and exploring the interpretability of the deep learning model.
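The pre-train, freeze, and fine-tune scheme described in the summary can be sketched in miniature. The real STTransformer uses attention layers and EEG data; the toy two-parameter linear model below is purely illustrative (all names and data are assumptions, not from the paper) and shows only the transfer mechanics: fit on a source domain, freeze a shared parameter, and adapt the remaining parameter on a small target sample.

```python
# Minimal sketch of model-transfer fine-tuning (illustrative only; the paper's
# model is a two-stream attention network, not a linear model).

def sgd_fit(params, frozen, data, lr=0.01, epochs=1000):
    """Fit y = w1*x + w2 by stochastic gradient descent.

    Parameter names listed in `frozen` keep their pre-trained values,
    mimicking the frozen layers of a transferred network.
    """
    for _ in range(epochs):
        for x, y in data:
            err = params["w1"] * x + params["w2"] - y
            grads = {"w1": err * x, "w2": err}
            for name, g in grads.items():
                if name not in frozen:  # skip updates for frozen parameters
                    params[name] -= lr * g
    return params

# 1) Pre-train on a "source domain" (here: y = 2x + 1) to get prior knowledge.
source = [(x, 2 * x + 1) for x in (-2, -1, 0, 1, 2)]
params = sgd_fit({"w1": 0.0, "w2": 0.0}, frozen=set(), data=source)

# 2) Freeze the shared weight w1 and fine-tune only w2 on a small
#    "target domain" sample with a shifted relationship (y = 2x + 3).
target = [(0, 3.0), (1, 5.0)]
params = sgd_fit(params, frozen={"w1"}, data=target)
```

After fine-tuning, `w1` retains its source-domain value (about 2.0) while `w2` has adapted to the target offset (about 3.0), which is the same division of labor as freezing early network layers and fine-tuning the head on a small target sample.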
- Transfer models are available for different mental fatigue tasks and subjects.
- The models require only a small sample size to be effective.
- The Parallel STTransformer model achieves accuracies of 89.66% and 87.76% on the two datasets.
ISSN: 1746-8094
DOI: 10.1016/j.bspc.2023.105638