EEG emotion recognition based on TQWT-features and hybrid convolutional recurrent neural network

Bibliographic Details
Published in: Biomedical Signal Processing and Control, Vol. 79, p. 104211
Main Authors: Zhong, Mei-yu; Yang, Qing-yu; Liu, Yi; Zhen, Bo-yu; Zhao, Feng-da; Xie, Bei-bei
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.01.2023
Summary:
•A TQWT-feature extraction method is proposed to obtain effective patterns of EEG signals.
•TQWT features in high-frequency bands are more suitable for EEG emotion recognition.
•A novel spatiotemporal representation structured for EEG signals is constructed.
•A lightweight deep learning model based on CNN and LSTM is proposed for EEG emotion recognition.

Electroencephalogram (EEG)-based emotion recognition has attracted considerable attention in Brain-Computer Interfaces. However, owing to the non-linearity and non-stationarity of EEG signals, it is difficult to analyze them and extract effective emotional information. In this paper, a novel EEG-based emotion recognition framework is proposed, comprising a Tunable Q-factor Wavelet Transform (TQWT) feature extraction method, a new spatiotemporal representation of multichannel EEG signals, and a Hybrid Convolutional Recurrent Neural Network (HCRNN). According to the oscillatory behavior of the signals, TQWT is first employed to decompose the EEG into several sub-bands with stationary characteristics. Mean absolute value and differential entropy features are extracted from these sub-bands and termed TQWT-features. Next, the TQWT-features are transformed into TQWT-Feature Block Sequences (TFBSs), a spatiotemporal representation used to train the deep model. The HCRNN model then fuses a lightweight Convolutional Neural Network (CNN) with a recurrent neural network based on Long Short-Term Memory (LSTM): the CNN learns the spatially correlated context of the TFBSs, and the LSTM captures temporal dependencies from the CNN's outputs. Finally, extensive subject-dependent experiments are carried out on the SEED dataset to classify positive, neutral, and negative emotional states. The results demonstrate that TQWT-features in high-frequency sub-bands are effective for EEG-based emotion recognition. HCRNN with TFBSs achieves superior recognition accuracy (95.33 ± 1.39 %), outperforming state-of-the-art deep learning models.
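The per-sub-band features named in the summary can be sketched as below. This is a minimal illustration, not the authors' implementation: mean absolute value (MAV) is the average magnitude of the samples, and differential entropy (DE) is computed under the standard Gaussian assumption used in EEG work, DE = ½·ln(2πeσ²). The sub-band signals here are placeholder random data standing in for TQWT outputs, since the TQWT decomposition itself is omitted.

```python
import math
import random

def mean_absolute_value(x):
    # MAV: average magnitude of the sub-band samples
    return sum(abs(v) for v in x) / len(x)

def differential_entropy(x):
    # Gaussian assumption: DE = 0.5 * ln(2 * pi * e * variance)
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return 0.5 * math.log(2 * math.pi * math.e * var)

# Placeholder sub-band signals standing in for TQWT outputs
# (any decomposition yielding sub-band time series could feed
# these feature functions).
random.seed(0)
subbands = [[random.gauss(0, 1 + b) for _ in range(200)] for b in range(4)]

# One (MAV, DE) pair per sub-band; in the paper's pipeline such
# features would be arranged into TQWT-Feature Block Sequences.
features = [(mean_absolute_value(s), differential_entropy(s)) for s in subbands]
for b, (mav, de) in enumerate(features):
    print(f"sub-band {b}: MAV={mav:.3f}, DE={de:.3f}")
```

Because DE grows with sub-band variance, wider-amplitude (here, higher-index) sub-bands yield larger DE values, which is what makes it a useful energy-sensitive feature per frequency band.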
ISSN:1746-8094
1746-8108
DOI:10.1016/j.bspc.2022.104211