Dynamic threshold distribution domain adaptation network: a cross-subject fatigue recognition method based on EEG signals
Published in | IEEE Transactions on Cognitive and Developmental Systems, Vol. 16, No. 1, p. 1 |
---|---|
Main Authors | |
Format | Journal Article |
Language | English |
Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2024 |
Summary: | Electroencephalogram (EEG) signals have been widely used in driver fatigue recognition in recent years due to their high resolution and non-invasive acquisition. However, EEG signals exhibit strong variability and significant individual differences, which place extremely high demands on a model's generalization ability in practical use. To address these issues, a dynamic threshold distribution domain adaptation network (DTDDAN) is proposed to classify EEG signals for driver fatigue recognition. In the model, a domain discriminator is introduced to alleviate the marginal distribution difference between domains, and a new loss, the Jensen-Shannon loss (JS loss), is applied to reduce the conditional distribution difference between domains, so that cross-subject-invariant deep features of EEG signals are learned. In addition, a dynamic threshold pseudo-label strategy is proposed to assign pseudo labels to target-domain samples during training. This strategy ensures that high-quality unlabeled target-domain data contribute to model training and that the model fully learns the information of the target domain. In extensive experiments on our self-constructed EEG-based fatigue driving dataset, competitive performance is achieved on the cross-subject classification task, with average classification sensitivity, specificity, and accuracy of 89.8%, 93.8%, and 92.2%, respectively. The experimental results show that the model can effectively extract cross-subject-invariant deep features from EEG signals. |
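The summary describes two ingredients that can be sketched generically: a Jensen-Shannon divergence term for aligning class-probability distributions across domains, and a pseudo-labeling rule that keeps only target-domain samples whose prediction confidence clears a threshold that changes over training. The NumPy sketch below illustrates both ideas; the linear threshold schedule and the threshold values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    A generic symmetric distribution-matching term of the kind the
    paper's JS loss is built on (illustrative sketch only).
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)  # mixture distribution

    def kl(a, b):  # Kullback-Leibler divergence KL(a || b)
        return np.sum(a * np.log(a / b))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def dynamic_threshold_pseudo_labels(probs, epoch, max_epochs,
                                    t_start=0.95, t_end=0.80):
    """Assign pseudo labels only to confident target-domain samples.

    The confidence threshold is relaxed linearly from t_start to t_end
    as training proceeds, so early epochs admit only very reliable
    pseudo labels (schedule and values are assumed, not from the paper).
    Returns the pseudo labels of the selected samples and the mask.
    """
    threshold = t_start + (t_end - t_start) * epoch / max_epochs
    confidence = probs.max(axis=1)   # top softmax probability per sample
    labels = probs.argmax(axis=1)    # predicted class per sample
    mask = confidence >= threshold   # keep only confident predictions
    return labels[mask], mask
```

For identical distributions the JS divergence is 0, and for disjoint ones it approaches log 2; at epoch 0 with the default schedule, only samples predicted with at least 0.95 confidence receive a pseudo label.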
ISSN: | 2379-8920, 2379-8939 |
DOI: | 10.1109/TCDS.2023.3257428 |