Hierarchical feature distillation model via dual-stage projections and graph embedding label propagation for emotion recognition

Bibliographic Details
Published in: Pattern Recognition, Vol. 171, p. 112143
Main Authors: Ren, Chao; Chen, Jinbo; Li, Rui; Chen, Yijiang; Wang, Tianzhi; Zheng, Weihao; Zhang, Xiaowei; Hu, Bin
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.03.2026
Summary: In multi-source domain adaptation, key challenges include negative transfer caused by feature coupling and inefficient pseudo-label generation. This paper develops a multi-source domain adaptive framework for EEG-based emotion recognition (MSGELP), which integrates a two-stage projection-matrix decoupling mechanism with graph-embedded label propagation. The method employs a dynamic source selection mechanism that adaptively selects the top-K most similar source domains based on a similarity evaluation over target-source domain pairs, eliminating latent sources of negative transfer. At the feature-decoupling level, a learnable two-stage projection matrix, consisting of a global projection matrix and an alignment projection matrix, is designed to explicitly separate cross-domain knowledge: the global projection matrix extracts common features spanning multiple domains, while the alignment projection matrix captures domain-specific features of each source-target pair, preserving discriminative information while avoiding feature entanglement. Furthermore, by constructing a similarity graph over source-target domain pairs and iteratively propagating labels, the graph embedding technique, together with iterative updates of the projection matrices, achieves continuous cross-domain knowledge distillation and effectively improves pseudo-label accuracy. Finally, rigorous evaluation under a cross-subject leave-one-subject-out cross-validation protocol on the SEED-IV and SEED-V datasets yields classification accuracies of 68.70% and 63.09%, respectively. Experimental results indicate that MSGELP effectively learns a shared subspace, mitigates the negative-transfer problem, and outperforms state-of-the-art methods. The code is available at https://github.com/czihan1022/MSGELP/.
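
To make the pipeline described in the summary concrete, the following is a minimal, hypothetical sketch (NumPy only) of its three main ingredients: dynamic top-K source selection, a two-stage projection (a shared global projection followed by a pair-specific alignment projection), and graph-based label propagation for pseudo-labels. The distance-based similarity score, the PCA- and CORAL-style projections, and all function names are illustrative stand-ins for the learnable matrices and graph embedding of MSGELP, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def select_top_k_sources(target_feats, source_feats_list, k=3):
    """Rank candidate source domains by a simple similarity score
    (negative distance between domain mean feature vectors) and keep
    the top-K most similar ones; a stand-in for the paper's
    similarity evaluation over target-source pairs."""
    mu_t = target_feats.mean(axis=0)
    scores = np.array([-np.linalg.norm(s.mean(axis=0) - mu_t)
                       for s in source_feats_list])
    return np.argsort(scores)[::-1][:k]           # indices of retained sources

def _sqrtm_psd(C):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def two_stage_projection(Xs, Xt, dim_global=32, eps=1e-3):
    """Toy analogue of the two-stage projection: a shared global projection
    (PCA over pooled source+target data) followed by a pair-specific
    alignment projection (CORAL-style covariance matching)."""
    X = np.vstack([Xs, Xt])
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    dim_global = min(dim_global, Vt.shape[0])
    P = Vt[:dim_global].T                          # global projection matrix
    Zs, Zt = (Xs - mu) @ P, (Xt - mu) @ P
    Cs = np.cov(Zs, rowvar=False) + eps * np.eye(dim_global)
    Ct = np.cov(Zt, rowvar=False) + eps * np.eye(dim_global)
    A = np.linalg.inv(_sqrtm_psd(Cs)) @ _sqrtm_psd(Ct)   # alignment projection
    return Zs @ A, Zt

def propagate_labels(W, Y_source, n_target, alpha=0.99, n_iter=50):
    """Graph-based label propagation over a joint source-target graph.
    W: (n, n) affinity matrix, source samples first, then target samples.
    Y_source: one-hot labels of the source samples.
    Returns soft pseudo-labels for the n_target target samples."""
    n, c = W.shape[0], Y_source.shape[1]
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d) + 1e-12)        # symmetric normalization
    Y = np.zeros((n, c))
    Y[:Y_source.shape[0]] = Y_source               # clamp known source labels
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1.0 - alpha) * Y      # diffuse labels on the graph
    return F[-n_target:]
```

In MSGELP itself, both projection matrices are learned jointly and updated iteratively together with the propagated pseudo-labels; the sketch only mirrors that data flow with fixed, closed-form surrogates.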
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2025.112143