Variational Autoencoder based Latent Factor Decoding of Multichannel EEG for Emotion Recognition


Bibliographic Details
Published in: 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 684-687
Main Authors: Li, Xiang; Zhao, Zhigang; Song, Dawei; Zhang, Yazhou; Niu, Chunyang; Zhang, Junwei; Huo, Jidong; Li, Jing
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2019
Summary: Robust cross-subject emotion recognition based on multichannel EEG has always been a challenging task. In this work, we hypothesize that there exist default latent brain variables shared across subjects in emotional processes. Hence, the states of the latent variables related to emotional processing should contribute to building robust recognition models. We propose to utilize a variational autoencoder (VAE) to determine the latent factors from multichannel EEG. Through a sequence modeling method, we examine the emotion recognition performance based on the learnt latent factors. The performance of the proposed methodology is verified on two public datasets (DEAP and SEED) and compared with traditional matrix-factorization-based (ICA) and autoencoder-based (AE) approaches. Experimental results demonstrate that neural networks are suitable for unsupervised EEG modeling and that our proposed emotion recognition framework achieves state-of-the-art performance. To the best of our knowledge, this is the first work to introduce the VAE into multichannel EEG decoding for emotion recognition.
DOI:10.1109/BIBM47256.2019.8983341
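The core mechanism the summary describes, encoding a multichannel EEG sample into Gaussian latent factors via a VAE's reparameterization trick and scoring an ELBO (reconstruction term plus KL divergence), can be illustrated with a minimal NumPy sketch. The dimensions (32 channels, 16 latent factors), the single-layer linear encoder/decoder, and the random weights are hypothetical placeholders for illustration; they are not the authors' architecture or trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 32 EEG channels, 16 latent factors.
n_channels, n_latent = 32, 16

# Randomly initialized single-layer encoder/decoder weights (illustration only).
W_mu = rng.normal(scale=0.1, size=(n_latent, n_channels))
W_logvar = rng.normal(scale=0.1, size=(n_latent, n_channels))
W_dec = rng.normal(scale=0.1, size=(n_channels, n_latent))

def encode(x):
    """Map one multichannel EEG sample to Gaussian latent parameters."""
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar):
    """z = mu + sigma * eps: sampling stays differentiable in a real VAE."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Reconstruct the channel-space sample from the latent factors."""
    return W_dec @ z

def elbo_terms(x):
    """Return the latent sample and the two (negative) ELBO components."""
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    x_hat = decode(z)
    recon = np.mean((x - x_hat) ** 2)  # reconstruction error
    # KL(q(z|x) || N(0, I)), closed form for a diagonal Gaussian posterior.
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return z, recon, kl

x = rng.standard_normal(n_channels)  # one simulated EEG sample
z, recon, kl = elbo_terms(x)
```

In the paper's pipeline, the latent vector `z` extracted per EEG segment would then feed a downstream sequence model for emotion classification; training (minimizing `recon + kl` by gradient descent) is omitted here.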