EEG-eye movement based subject dependence, cross-subject, and cross-session emotion recognition with multidimensional homogeneous encoding space alignment


Bibliographic Details
Published in Expert systems with applications, Vol. 251, p. 124001
Main Authors Zhu, Mu; Wu, Qingzhou; Bai, Zhongli; Song, Yu; Gao, Qiang
Format Journal Article
Language English
Published Elsevier Ltd 01.10.2024

Abstract Joint multimodal learning helps extract general cross-modal information and thereby improves the performance of multimodal emotion recognition. However, focusing on a single common pattern can cause multimodal data to deviate from its original distribution and fail to fully capture its potential representation. We therefore propose a multidimensional homogeneous encoding space alignment (MHESA) method consisting of two parts: multimodal joint learning and modal knowledge transfer. To obtain a common projection space for EEG and eye movement (EM) features, a multimodal joint space encoder learns a homogeneous EEG-EM joint space. To obtain a homogeneous encoding space based on modal knowledge, a knowledge transfer module learns the spatial distribution of EM features while retaining the original EEG features. The outputs of the two modules are combined to construct a multidimensional homogeneous encoding space, whose weights and multi-task loss function are dynamically adjusted by a Multi-task Joint Optimization Strategy (MJOS). Analysis of the multi-task optimization shows that, compared with the subject-dependent scenario, the cross-subject scenario benefits more from the construction of the joint encoding space, while the modal knowledge transfer features contribute more in the cross-session scenario. Experimental results show that MHESA achieves more stable performance across all three emotion recognition scenarios.
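The dynamically weighted multi-task loss described in the abstract can be illustrated with a small, self-contained sketch. This is an illustrative stand-in, not the authors' actual MJOS implementation: it combines per-branch losses using learnable log-variance weights (a common uncertainty-weighting scheme for multi-task learning), which is one plausible way a multi-task loss could be "dynamically adjusted"; the function name and branch values are hypothetical.

```python
import math

def mjos_style_loss(task_losses, log_vars):
    """Combine per-task losses with learnable uncertainty weights.

    Hypothetical sketch of a dynamically weighted multi-task loss
    (log-variance weighting), NOT the paper's exact MJOS. Each task
    loss is scaled by exp(-s) and regularized by +s, so tasks with
    large, noisy losses are automatically down-weighted.
    """
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(task_losses, log_vars))

# Three branches loosely mirroring the multidimensional encoding space:
# joint EEG-EM space, knowledge-transfer space, and classification.
losses = [0.9, 1.4, 0.6]          # hypothetical per-branch loss values
log_vars = [0.0, 0.5, -0.2]       # learnable; updated by the optimizer in practice
total = mjos_style_loss(losses, log_vars)
```

In a real training loop the `log_vars` would be trainable parameters, so the relative task weights shift over training rather than being fixed by hand.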
ArticleNumber 124001
Author Gao, Qiang
Bai, Zhongli
Zhu, Mu
Wu, Qingzhou
Song, Yu
Author_xml – sequence: 1
  givenname: Mu
  surname: Zhu
  fullname: Zhu, Mu
  email: zm78792021@163.com
  organization: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
– sequence: 2
  givenname: Qingzhou
  surname: Wu
  fullname: Wu, Qingzhou
  email: wqz9879@163.com
  organization: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
– sequence: 3
  givenname: Zhongli
  surname: Bai
  fullname: Bai, Zhongli
  email: ZL.Bai@hotmail.com
  organization: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
– sequence: 4
  givenname: Yu
  orcidid: 0000-0002-9295-7795
  surname: Song
  fullname: Song, Yu
  email: jasonsongrain@hotmail.com
  organization: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
– sequence: 5
  givenname: Qiang
  surname: Gao
  fullname: Gao, Qiang
  email: gaoqiang@tjut.edu.cn
  organization: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, TUT Maritime College, Tianjin University of Technology, Tianjin 300384, China
CitedBy_id 10.1016/j.knosys.2025.113238
10.1016/j.aej.2024.12.081
10.1016/j.eswa.2024.125089
ContentType Journal Article
Copyright 2024 Elsevier Ltd
DOI 10.1016/j.eswa.2024.124001
DatabaseName CrossRef
DatabaseTitle CrossRef
Discipline Computer Science
EISSN 1873-6793
ExternalDocumentID 10_1016_j_eswa_2024_124001
S0957417424008674
ISSN 0957-4174
IsPeerReviewed true
IsScholarly true
Keywords Multimodal joint learning
Multi-task learning
Knowledge transfer
Multidimensional homogeneous encoding space
Language English
ORCID 0000-0002-9295-7795
PublicationCentury 2000
PublicationDate 2024-10-01
PublicationDecade 2020
PublicationTitle Expert systems with applications
PublicationYear 2024
Publisher Elsevier Ltd
References Al-Quraishi, Elamvazuthi, Tang, Muhammad, Parasuraman, & Borboni (2021). Multi-modal fusion approach based on EEG and EMG signals for lower limb movement recognition. IEEE Sensors Journal, 21(24), 27640–37650. doi:10.1109/JSEN.2021.3119074
Bai, Li, Li, Song, Gao, & Mao (2023). Domain-adaptive emotion recognition based on horizontal vertical flow representation of EEG signals. IEEE Access, 11, 55023–55034. doi:10.1109/ACCESS.2023.3270977
Bayoudh (2021). A survey on deep multimodal learning for computer vision: Advances, trends, applications, and dataset. The Visual Computer, 1–32.
Çelik (2021). Wasserstein distance to independence models. Journal of Symbolic Computation, 104, 855–873. doi:10.1016/j.jsc.2020.10.005
Chen (2023). A multi-stage dynamical fusion network for multimodal emotion recognition. Cognitive Neurodynamics, 17(3), 671–680. doi:10.1007/s11571-022-09851-w
Chen (2023). Similarity constraint style transfer mapping for emotion recognition. Biomedical Signal Processing and Control, 80. doi:10.1016/j.bspc.2022.104314
Gautier & El Haj (2023). Eyes don't lie: Eye movements differ during covert and overt autobiographical recall. Cognition, 235. doi:10.1016/j.cognition.2023.105416
Gong, Chen, & Zhang (2024). Cross-cultural emotion recognition with EEG and eye movement signals based on multiple stacked broad learning system. IEEE Transactions on Computational Social Systems, 11(2), 2014–2025. doi:10.1109/TCSS.2023.3298324
Gu, Cai, Gao, Jiang, Ning, & Qian (2022). Multi-source domain transfer discriminative dictionary learning modeling for electroencephalogram-based emotion recognition. IEEE Transactions on Computational Social Systems, 9(6), 1604–1612. doi:10.1109/TCSS.2022.3153660
Lan, Liu, & Bao-Liang (2020). Multimodal emotion recognition using deep generalized canonical correlation analysis with an attention mechanism. 2020 International Joint Conference on Neural Networks (IJCNN).
Li (2019). EEG based emotion recognition by combining functional connectivity network and local activations. IEEE Transactions on Bio-Medical Engineering, 66(10), 2869–2881.
Li (2022). Emotion recognition from EEG based on multi-task learning with capsule network and attention mechanism. Computers in Biology and Medicine, 143. doi:10.1016/j.compbiomed.2022.105303
Li (2023). GMSS: Graph-based multi-task self-supervised learning for EEG emotion recognition. IEEE Transactions on Affective Computing, 14(3), 2512–2525. doi:10.1109/TAFFC.2022.3170428
Li (2023). MTLFuseNet: A novel emotion recognition model based on deep latent feature fusion of EEG signals and multi-task learning. Knowledge-Based Systems, 276. doi:10.1016/j.knosys.2023.110756
Li, Liu, Yang, Hou, Song, Song, Gao, & Mao (2023). Emotion recognition of subjects with hearing impairment based on fusion of facial expression and EEG topographic map. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31, 437–445. doi:10.1109/TNSRE.2022.3225948
Li, Zhang, Tiwari, Song, Hu, Yang, Zhao, Kumar, & Marttinen (2022). EEG based emotion recognition: A tutorial and review. ACM Computing Surveys, 55(4), 1–57. doi:10.1145/3524499
Liu (2019). Multimodal emotion recognition using deep canonical correlation analysis. arXiv preprint.
Liu (2021). Comparing recognition performance and robustness of multimodal deep learning models for multimodal emotion recognition. IEEE Transactions on Cognitive and Developmental Systems, 14(2), 715–729. doi:10.1109/TCDS.2021.3071170
Liu, Breuel, & Kautz (2017). Unsupervised image-to-image translation networks. pp. 700–708.
Liu & Tuzel (2016). Coupled generative adversarial networks. pp. 469–477.
Long, Cao, Wang, & Jordan (2018). Conditional adversarial domain adaptation. pp. 1640–1650.
Lu, Zheng, Li, & Lu (2015). Combining eye movements and EEG to enhance emotion recognition. IJCAI'15, 1170–1176.
Ma, Zhao, Meng, Zhang, She, & Zhang (2023). Cross-subject emotion recognition based on domain similarity of EEG signal transfer learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31, 936–943. doi:10.1109/TNSRE.2023.3236687
Quan (2023). EEG-based cross-subject emotion recognition using multi-source domain transfer learning. Biomedical Signal Processing and Control, 84. doi:10.1016/j.bspc.2023.104741
Shanechi (2019). Brain-machine interfaces from motor to mood. Nature Neuroscience, 22(10), 1554–1564. doi:10.1038/s41593-019-0488-y
Startsev & Zemblys (2023). Evaluating eye movement event detection: A review of the state of the art. Behavior Research Methods, 55, 1653–1714. doi:10.3758/s13428-021-01763-7
Wang (2021). Emotion transformer fusion: Complementary representation properties of EEG and eye movements on recognizing anger and surprise. 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM).
Wang (2022). Multi-modal domain adaptation variational autoencoder for EEG-based emotion recognition. IEEE/CAA Journal of Automatica Sinica, 9(9), 1612–1626. doi:10.1109/JAS.2022.105515
Wu (2022). Investigating EEG-based functional connectivity patterns for multimodal emotion recognition. Journal of Neural Engineering, 19(1). doi:10.1088/1741-2552/ac49a7
Yang, Gao, Song, Song, Mao, & Liu (2022). Investigating of deaf emotion cognition pattern by EEG and facial expression combination. IEEE Journal of Biomedical and Health Informatics, 26(2), 589–599. doi:10.1109/JBHI.2021.3092412
Yang, Li, Hou, Song, & Gao (2024). Deep feature extraction and attention fusion for multimodal emotion recognition. IEEE Transactions on Circuits and Systems II: Express Briefs, 71(3), 1526–1530.
Yin, Wu, Yang, Li, Li, Liang, & Lv (2024). Research on multimodal emotion recognition based on fusion of electroencephalogram and electrooculography. IEEE Transactions on Instrumentation and Measurement, 73, 1–12. doi:10.1109/TIM.2024.3488141
Yu, Wang, Chen, & Huang (2019). Transfer learning with dynamic adversarial adaptation network. Proceedings of the IEEE International Conference on Data Mining, 778–786.
Zhang, Tang, & Guan (2022). Visual-to-EEG cross-modal knowledge distillation for continuous emotion recognition. Pattern Recognition, 130. doi:10.1016/j.patcog.2022.108833
Zhao (2019). Classification of five emotions from EEG and eye movement data: Complementary representation properties. 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER).
Zheng & Hsiao (2023). Differential audiovisual information processing in emotion recognition: An eye-tracking study. Emotion, 23(4), 1028. doi:10.1037/emo0001144
Zheng, Liu, Lu, Lu, & Cichocki (2018). EmotionMeter: A multimodal framework for recognizing human emotions. IEEE Transactions on Cybernetics, 49(3), 1110–1122. doi:10.1109/TCYB.2018.2797176
Zheng & Lu (2015). Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental Development, 7(3), 162–175. doi:10.1109/TAMD.2015.2431497
StartPage 124001
SubjectTerms Knowledge transfer
Multi-task learning
Multidimensional homogeneous encoding space
Multimodal joint learning
Title EEG-eye movement based subject dependence, cross-subject, and cross-session emotion recognition with multidimensional homogeneous encoding space alignment
URI https://dx.doi.org/10.1016/j.eswa.2024.124001
Volume 251