Emotion Recognition From EEG Signals of Hearing-Impaired People Using Stacking Ensemble Learning Framework Based on a Novel Brain Network

Bibliographic Details
Published in: IEEE Sensors Journal, Vol. 21, No. 20, pp. 23245-23255
Main Authors: Kang, Qiaoju; Gao, Qiang; Song, Yu; Tian, Zekun; Yang, Yi; Mao, Zemin; Dong, Enzeng
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 15.10.2021

Summary: Emotion recognition based on electroencephalography (EEG) signals has become an active research topic in neuroscience, psychology, neural engineering, and computer science. However, existing studies focus mainly on normal or depressed subjects, and few report on hearing-impaired subjects. In this work, we collected EEG signals from 15 hearing-impaired subjects to categorize three types of emotions (positive, neutral, and negative). To study the differences in functional connectivity between normal and hearing-impaired subjects under different emotional states, a novel brain network stacking ensemble learning framework is proposed. The phase-locking value (PLV) is used to calculate the correlation between EEG channels, and a brain network is then constructed using double thresholds. The spatial features of the brain network are extracted from the perspectives of local differentiation and global integration, and the stacking ensemble learning framework is used to classify the fused features. To evaluate the proposed model, extensive experiments were carried out on the SEED dataset; the proposed method outperforms state-of-the-art models, achieving an average classification accuracy of 0.955 (std: 0.052). On the hearing-impaired data, the average classification accuracy is 0.984 (std: 0.005). Finally, we investigated the activation patterns to reveal important brain regions and inter-channel relations for emotion recognition.
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2021.3108471