Hierarchical feature distillation model via dual-stage projections and graph embedding label propagation for emotion recognition
Published in: Pattern Recognition, Vol. 171, p. 112143
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.03.2026
Summary: In multi-source domain adaptation, key challenges include negative transfer caused by feature coupling and inefficient pseudo-label generation. This paper develops a multi-source domain-adaptive framework for EEG-based emotion recognition (MSGELP), which integrates a two-stage projection-matrix decoupling mechanism with graph-embedded label propagation. The method employs a dynamic source selection mechanism that adaptively selects the top-K most similar source domains based on similarity evaluation across target-source domain pairs, eliminating latent sources of negative transfer. At the feature-decoupling level, a learnable two-stage projection matrix, comprising a global projection matrix and an alignment projection matrix, explicitly separates cross-domain knowledge: the global projection matrix extracts common features spanning multiple domains, while the alignment projection matrix captures domain-specific features of source-target pairs, preserving discriminative information while avoiding feature entanglement. Furthermore, by constructing a similarity graph over source-target domain pairs and iteratively propagating labels, graph embedding, together with iterative updates to the projection matrices, achieves continuous cross-domain knowledge distillation and improves pseudo-label accuracy. Finally, under a cross-subject leave-one-subject-out cross-validation protocol on the SEED-IV and SEED-V datasets, MSGELP achieves classification accuracies of 68.70% and 63.09%, respectively. Experimental results indicate that MSGELP effectively learns a shared subspace, mitigates negative transfer, and outperforms state-of-the-art methods. The code is available at https://github.com/czihan1022/MSGELP/.
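The abstract describes two steps that lend themselves to a short illustration: ranking candidate source domains by similarity and keeping the top-K, and propagating labels over a similarity graph to produce target pseudo-labels. The sketch below is a minimal, generic version of those two steps, not the authors' implementation (see the linked repository for that): the linear-MMD similarity measure, the RBF affinity graph, the Zhou-style propagation rule, and all function names and hyperparameters (k, alpha, sigma, n_iter) are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def mmd_distance(Xs, Xt):
    """Linear-kernel MMD between two feature sets (assumed similarity criterion)."""
    return np.linalg.norm(Xs.mean(axis=0) - Xt.mean(axis=0))

def select_top_k_sources(source_feats, Xt, k=3):
    """Rank candidate source domains by similarity to the target and keep the top-K."""
    dists = [mmd_distance(Xs, Xt) for Xs in source_feats]
    order = np.argsort(dists)           # smaller distance = more similar
    return order[:k]

def propagate_labels(Xs, ys, Xt, n_classes, alpha=0.99, sigma=1.0, n_iter=50):
    """Graph-based label propagation from a labeled source to the unlabeled target.

    Builds an RBF affinity graph over the concatenated source/target samples and
    iterates F <- alpha * S @ F + (1 - alpha) * Y; this is one standard propagation
    rule used here only for illustration, since the abstract does not specify the
    exact update.
    """
    X = np.vstack([Xs, Xt])
    n_s = len(Xs)
    # Pairwise squared distances and RBF affinities (assumed graph construction).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1) + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt      # symmetrically normalized affinity
    # One-hot seed labels for source rows; zeros for the unlabeled target rows.
    Y = np.zeros((len(X), n_classes))
    Y[np.arange(n_s), ys] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return F[n_s:].argmax(axis=1)        # pseudo-labels for the target samples
```

As a usage sketch, `select_top_k_sources([X1, X2, X3], Xt, k=2)` would return the indices of the two candidate source subjects most similar to the target, and `propagate_labels` could then be run on each retained source-target pair to assign pseudo-labels to the target samples.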
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2025.112143