VAE-CapsNet: A common emotion information extractor for cross-subject emotion recognition

Bibliographic Details
Published in Knowledge-Based Systems, Vol. 311, p. 113018
Main Authors Chen, Huayu, Li, Junxiang, He, Huanhuan, Sun, Shuting, Zhu, Jing, Li, Xiaowei, Hu, Bin
Format Journal Article
Language English
Published Elsevier B.V. 28.02.2025

Summary: Owing to the uniqueness of brain structure, function, and emotional experience, neural activity patterns differ among subjects. As a result, affective brain–computer interfaces (aBCIs) must account for individual differences in neural activity, electroencephalogram (EEG) data, and particularly emotion patterns (EPs). These differences in emotion information types and distribution patterns, such as session EP differences (SEPD) and individual EP differences (IEPD), pose notable challenges for cross-subject and cross-session emotion classification. To address these challenges, we propose VAE-CapsNet, a novel common emotion information extraction framework that combines a variational autoencoder (VAE) and a capsule network (CapsNet). A VAE-based unsupervised EP transformation module mitigates SEPD, while five segmental activation functions are introduced to match EPs across subjects. The CapsNet-based information extractor handles the various types of emotion information efficiently, producing universal emotional features across sessions. We validated the performance of the VAE-CapsNet framework through cross-session, cross-subject, and cross-dataset experiments on the SEED, SEED-IV, SEED-V, and FACED datasets.

Highlights:
• Nine types of emotion information are refined within the positive–negative–neutral scenario.
• A VAE-based EP transformation framework is proposed to address the SEPD problem.
• Information adjustment functions are designed to process and align different types of emotion information.
• A CapsNet-based common emotion information extractor is introduced to tackle the IEPD problem.
ISSN:0950-7051
DOI:10.1016/j.knosys.2025.113018
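The summary above outlines two main stages: an unsupervised VAE that transforms session-specific emotion patterns toward a shared space (targeting SEPD), and a capsule-network head that extracts common emotion information across subjects (targeting IEPD). Below is a minimal sketch of such a combination, assuming PyTorch; the 310-dimensional input (62 EEG channels × 5 frequency bands of differential-entropy features, typical for SEED), all layer sizes, the routing depth, and the three-class output are illustrative assumptions, and the paper's five segmental activation functions and exact architecture are not reproduced here.

```python
# Minimal sketch (not the authors' implementation): a VAE that maps
# session-specific EEG features toward a shared latent space, followed by a
# capsule layer with routing-by-agreement as the common emotion-information
# extractor. All dimensions and hyperparameters below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EPTransformVAE(nn.Module):
    """Unsupervised VAE used to transform emotion patterns (EPs) into a
    common latent space; stands in for the SEPD-mitigation module."""

    def __init__(self, in_dim: int = 310, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.fc_mu = nn.Linear(128, latent_dim)
        self.fc_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar


def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing: preserves direction, bounds vector length in [0, 1)."""
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)


class EmotionCapsuleHead(nn.Module):
    """Capsule classifier with routing-by-agreement; one output capsule per
    emotion class (3 here for positive/negative/neutral)."""

    def __init__(self, latent_dim=64, n_primary=8, prim_dim=8,
                 n_classes=3, out_dim=16, routing_iters=3):
        super().__init__()
        self.n_primary, self.prim_dim = n_primary, prim_dim
        self.primary = nn.Linear(latent_dim, n_primary * prim_dim)
        # Per-(primary, class) transform producing the prediction ("vote") vectors.
        self.W = nn.Parameter(0.01 * torch.randn(n_primary, n_classes, out_dim, prim_dim))
        self.routing_iters = routing_iters

    def forward(self, z):
        u = squash(self.primary(z).view(-1, self.n_primary, self.prim_dim))
        # u_hat[b, i, j] = W[i, j] @ u[b, i]: vote from primary capsule i for class j.
        u_hat = torch.einsum("ijdp,bip->bijd", self.W, u)
        b = torch.zeros(u.size(0), self.n_primary, self.W.size(1), device=z.device)
        for _ in range(self.routing_iters):
            c = F.softmax(b, dim=-1)                      # coupling coefficients
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))
            b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)  # agreement update
        return v.norm(dim=-1)                             # capsule lengths as class scores


if __name__ == "__main__":
    x = torch.randn(8, 310)                 # batch of (assumed) DE feature vectors
    vae, head = EPTransformVAE(), EmotionCapsuleHead()
    recon, mu, logvar = vae(x)
    scores = head(mu)                       # latent mean as the shared representation
    print(recon.shape, scores.shape)        # torch.Size([8, 310]) torch.Size([8, 3])
```

One plausible, but assumed, training split is to fit the VAE per session with the usual reconstruction-plus-KL objective and then train the capsule head on the transformed latent features with a margin loss on capsule lengths; the actual VAE-CapsNet objectives and the EP-matching activation functions are specified in the paper itself.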