Domain-adaptive emotion recognition based on horizontal vertical flow representation of EEG signals
| Published in | IEEE Access, Vol. 11, p. 1 |
|---|---|
| Main Authors | , , , , , |
| Format | Journal Article |
| Language | English |
| Published | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.01.2023 |
| Subjects | |
Summary: | With the development of cognitive science and brain science, brain-computer interface technology can use electroencephalogram (EEG) signals to better represent inner emotional changes. In this paper, a video-induced emotional stimulation experimental paradigm was designed, and the EEG signals of 15 hearing-impaired subjects under three emotions (positive, neutral, and negative) were collected. Considering the flow-diffusion properties of EEG signals, we used the diffusion effect based on horizontal and vertical representation forms to obtain spatial-domain features. After EEG preprocessing, the differential entropy (DE) feature in the frequency domain is extracted. The frequency-domain features of 62 channels are delivered to two Bi-directional Long Short-Term Memory (BiLSTM) networks to obtain spatial-domain features of the horizontal and vertical representations, respectively, and the two kinds of domain features are then fused by a residual network. An attention mechanism is applied to effectively extract emotional representational information from the fused features. To address the cross-subject problem of emotion recognition, a domain adaptation method is utilized, and a center alignment loss function is applied to increase inter-class distance and reduce intra-class distance. According to the experimental results, average accuracies of 75.89% (subject-dependent) and 69.59% (cross-subject) are obtained. Moreover, validation was also performed on the public SEED dataset, achieving average accuracies of 93.99% (subject-dependent) and 84.22% (cross-subject), respectively. |
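The DE feature extracted above has a simple closed form when each band-filtered EEG segment is assumed to be approximately Gaussian: DE = ½ ln(2πeσ²). A minimal sketch of that computation (function and variable names are illustrative, not from the paper; the paper applies this per frequency band across 62 channels):

```python
import numpy as np

def differential_entropy(segment):
    """Differential entropy of a band-filtered EEG segment.

    Under a Gaussian assumption for the band-limited signal,
    DE reduces to 0.5 * ln(2 * pi * e * variance).
    """
    variance = np.var(segment)
    return 0.5 * np.log(2 * np.pi * np.e * variance)

# Hypothetical example: one 1-second segment sampled at 200 Hz
# for a single channel (synthetic data, not from the study).
rng = np.random.default_rng(seed=0)
eeg_segment = rng.normal(loc=0.0, scale=2.0, size=200)
print(differential_entropy(eeg_segment))
```

Because DE grows monotonically with signal variance, it serves as a compact per-band energy-like descriptor that the BiLSTM stages can consume.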
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2023.3270977 |