Multi-Modal Domain Adaptation Variational Auto-encoder for EEG-Based Emotion Recognition


Bibliographic Details
Published in: 自动化学报(英文版) (IEEE/CAA Journal of Automatica Sinica), Vol. 9, No. 9, pp. 1612-1626
Main Authors: Yixin Wang, Shuang Qiu, Dan Li, Changde Du, Bao-Liang Lu, Huiguang He
Format: Journal Article
Language: English
Published: 01.09.2022
Author affiliations:
Research Center for Brain-inspired Intelligence, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China
Beijing Institute of Control and Electronic Technology, Beijing 100038, China
Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
School of Mathematics and Information Sciences, Yantai University, Yantai 264003, China
Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
Summary: Traditional electroencephalograph (EEG)-based emotion recognition requires a large number of calibration samples to build a model for a specific subject, which restricts the application of the affective brain-computer interface (BCI) in practice. We attempt to use the multi-modal data from the past session to realize emotion recognition with only a small amount of calibration samples. To solve this problem, we propose a multi-modal domain adaptive variational autoencoder (MMDA-VAE) method, which learns shared cross-domain latent representations of the multi-modal data. Our method builds a multi-modal variational autoencoder (MVAE) to project the data of multiple modalities into a common space. Through adversarial learning and cycle-consistency regularization, our method can reduce the distribution difference of each domain on the shared latent representation layer and realize the transfer of knowledge. Extensive experiments are conducted on two public datasets, SEED and SEED-IV, and the results show the superiority of our proposed method. Our work can effectively improve the performance of emotion recognition with a small amount of labelled multi-modal data.
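The record itself gives no implementation details, but the fusion step of a multi-modal VAE, projecting per-modality posteriors (e.g. EEG and eye-movement features) into one shared latent space, is commonly realized as a product of Gaussian experts, as in Wu and Goodman's MVAE. The sketch below shows that fusion for a single latent dimension; the function name and the N(0, 1) prior expert are illustrative assumptions, not the paper's exact formulation.

```python
def poe_fuse(mus, variances):
    """Product-of-experts fusion of per-modality Gaussian posteriors.

    Each modality contributes an expert N(mu_i, var_i); a standard-normal
    prior expert N(0, 1) is included. The product of Gaussians is itself
    Gaussian, with precisions (inverse variances) adding up.
    """
    # Fused precision: 1.0 from the prior plus one term per modality.
    precision = 1.0 + sum(1.0 / v for v in variances)
    var = 1.0 / precision
    # Fused mean: precision-weighted average of the expert means
    # (the prior's mean is 0, so it contributes nothing to the numerator).
    mu = var * sum(m / v for m, v in zip(mus, variances))
    return mu, var


# Hypothetical usage: fuse an EEG posterior N(1, 1) with an
# eye-movement posterior N(3, 1) into one shared latent belief.
mu, var = poe_fuse([1.0, 3.0], [1.0, 1.0])
```

A missing modality is handled naturally here: dropping its (mu, var) pair from the lists simply removes that expert from the product, which is one reason product-of-experts fusion is popular for multi-modal latent spaces.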
ISSN: 2329-9266
DOI: 10.1109/JAS.2022.105515