Comprehensive Multisource Learning Network for Cross-Subject Multimodal Emotion Recognition
Published in | IEEE transactions on emerging topics in computational intelligence Vol. 9; no. 1; pp. 365 - 380 |
Main Authors | Chen, Chuangquan; Li, Zhencheng; Kou, Kit Ian; Du, Jie; Li, Chen; Wang, Hongtao; Vong, Chi-Man |
Format | Journal Article |
Language | English |
Published | Piscataway: IEEE, 01.02.2025 (The Institute of Electrical and Electronics Engineers, Inc.) |
Abstract | Electroencephalography (EEG) signals and eye movement signals, which represent internal physiological responses and external subconscious behaviors, respectively, have been shown to be reliable indicators for recognizing emotions. However, integrating these two modalities across multiple subjects presents several challenges: 1) designing a robust consistency metric that balances the consistency and divergences between heterogeneous modalities across multiple subjects; 2) simultaneously considering intra-modality and inter-modality information across multiple subjects; and 3) overcoming individual differences among multiple subjects and generating subject-invariant representations of the multimodal fused features. To address these challenges associated with multisource data (i.e., multiple modalities and subjects), we propose a novel comprehensive multisource learning network (CMSLNet) for cross-subject multimodal emotion recognition. Specifically, an instance-level adaptive robust consistency metric is first designed to better align the information between EEG signals and eye movement signals, identifying their consistency and divergences across various emotions. Subsequently, an attentive low-rank multimodal fusion (Att-LMF) method is developed to account for individual differences and dynamically learn intra-modality and inter-modality information, resulting in highly discriminative fused features. Finally, domain generalization is utilized to extract subject-invariant representations of the fused features, thus adapting to new subjects and enhancing the model's generalization. Through these elaborate designs, CMSLNet effectively incorporates the information from multisource data, thus significantly improving the accuracy and reliability of emotion recognition. Extensive experiments on two public datasets demonstrate the superior performance of CMSLNet. 
CMSLNet achieves high accuracies of 83.15% on the SEED-IV dataset and 87.32% on the SEED-V dataset, surpassing the state-of-the-art methods by 3.62% and 4.60%, respectively. |
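The Att-LMF step described in the abstract builds on low-rank multimodal fusion: instead of forming the full outer-product tensor of the two modality vectors, each modality is projected through rank-r factor matrices and the projections are multiplied elementwise. The sketch below shows plain low-rank fusion only (the attention weighting of Att-LMF is omitted); the function name, feature dimensions, and random weights are illustrative, not taken from the paper.

```python
import numpy as np

def lmf_fuse(z_eeg, z_eye, W_eeg, W_eye):
    """Plain low-rank fusion of two modality vectors.

    A trailing 1 is appended to each vector so unimodal terms survive
    the elementwise product; rank-r factor matrices stand in for the
    full outer-product weight tensor. W_m has shape (rank, d_m + 1, d_out).
    """
    za = np.append(z_eeg, 1.0)                  # (d_eeg + 1,)
    zv = np.append(z_eye, 1.0)                  # (d_eye + 1,)
    proj_a = np.einsum('rij,i->rj', W_eeg, za)  # (rank, d_out)
    proj_v = np.einsum('rij,i->rj', W_eye, zv)  # (rank, d_out)
    # Sum over rank factors of the per-rank elementwise products.
    return (proj_a * proj_v).sum(axis=0)        # (d_out,)

# Toy dimensions for an EEG vector and an eye-movement vector.
rng = np.random.default_rng(0)
d_eeg, d_eye, d_out, rank = 310, 33, 64, 4
h = lmf_fuse(rng.standard_normal(d_eeg),
             rng.standard_normal(d_eye),
             rng.standard_normal((rank, d_eeg + 1, d_out)) * 0.1,
             rng.standard_normal((rank, d_eye + 1, d_out)) * 0.1)
print(h.shape)  # (64,)
```

The low-rank factorization keeps the parameter count linear in the number of modalities, which is why it is a common backbone for EEG + eye-movement fusion.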
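The abstract's final component, domain generalization across subjects, is usually trained adversarially; among the works this article cites is Ganin and Lempitsky's "Unsupervised domain adaptation by backpropagation", whose gradient reversal layer is the standard mechanism. The sketch below is a minimal illustration of that layer, not the paper's implementation; the class name and lambda value are invented for the example.

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer: identity on the forward pass, multiplies
    the incoming gradient by -lambda on the backward pass, so the shared
    feature extractor learns to confuse the subject/domain classifier."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                       # features pass through unchanged

    def backward(self, grad_out):
        return -self.lam * grad_out    # sign flipped toward the extractor

grl = GradReverse(lam=0.5)
x = np.array([1.0, -2.0, 3.0])   # toy fused features
g = np.array([0.1, 0.2, -0.3])   # toy gradient from the domain classifier
fwd = grl.forward(x)             # identical to x
bwd = grl.backward(g)            # negated, scaled by 0.5
```

In an autograd framework this would be a custom backward function; the effect is that minimizing the domain classifier's loss simultaneously maximizes domain confusion upstream, pushing the fused features toward subject invariance.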
Author | Chen, Chuangquan; Li, Chen; Vong, Chi-Man; Wang, Hongtao; Li, Zhencheng; Kou, Kit Ian; Du, Jie |
Author_xml | – sequence: 1 givenname: Chuangquan orcidid: 0000-0002-3811-296X surname: Chen fullname: Chen, Chuangquan email: chenchuangquan87@163.com organization: School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
– sequence: 2 givenname: Zhencheng orcidid: 0000-0002-8359-8225 surname: Li fullname: Li, Zhencheng email: lizhencheng97@163.com organization: School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
– sequence: 3 givenname: Kit Ian orcidid: 0000-0003-1924-9087 surname: Kou fullname: Kou, Kit Ian email: kikou@um.edu.mo organization: Department of Mathematics, University of Macau, Macao, China
– sequence: 4 givenname: Jie orcidid: 0000-0003-1518-436X surname: Du fullname: Du, Jie email: dujie@szu.edu.cn organization: School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
– sequence: 5 givenname: Chen orcidid: 0000-0001-7634-7834 surname: Li fullname: Li, Chen email: fslichen@fosu.edu.cn organization: School of Mathematics and Big Data, Foshan University, Foshan, China
– sequence: 6 givenname: Hongtao orcidid: 0000-0002-6564-5753 surname: Wang fullname: Wang, Hongtao email: nushongtaowang@qq.com organization: School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
– sequence: 7 givenname: Chi-Man orcidid: 0000-0001-7997-8279 surname: Vong fullname: Vong, Chi-Man email: cmvong@um.edu.mo organization: Department of Computer and Information Science, University of Macau, Macao, China |
CODEN | ITETCU |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2025 |
DOI | 10.1109/TETCI.2024.3406422 |
EISSN | 2471-285X |
EndPage | 380 |
Genre | orig-research |
GrantInformation_xml | – fundername: Chinese Guangdong's S&T Project grantid: 2022A0505020028
– fundername: Basic and Applied Basic Research Foundation of Guangdong Province; Guangdong Basic and Applied Basic Research Foundation grantid: 2023A1515011978; 2020A1515111154; 2022A1515010160 funderid: 10.13039/501100021171
– fundername: National Natural Science Foundation of China grantid: 62201402 funderid: 10.13039/501100001809
– fundername: Projects for International Scientific and Technological Cooperation of Guangdong Province grantid: 2023A0505050144
– fundername: Hong Kong and Macau Joint Research and Development Fund of Wuyi University grantid: 2021WGALH19
– fundername: Science and Technology Development Fund, Macau S.A.R grantid: 0036/2021/AGJ
– fundername: Department of Education of Guangdong Province; Educational Commission of Guangdong Province grantid: 2021KTSCX136 funderid: 10.13039/501100010226 |
ISSN | 2471-285X |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0002-3811-296X 0000-0001-7634-7834 0000-0002-6564-5753 0000-0001-7997-8279 0000-0003-1924-9087 0000-0003-1518-436X 0000-0002-8359-8225 |
PageCount | 16 |
PublicationDate | 2025-02-01 |
PublicationPlace | Piscataway |
PublicationTitle | IEEE transactions on emerging topics in computational intelligence |
PublicationTitleAbbrev | TETCI |
PublicationYear | 2025 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 365 |
SubjectTerms | Brain modeling Correlation cross-subject Datasets domain generalization Electroencephalography Emotion recognition Emotions Eye movements Feature extraction Invariants Learning low-rank multimodal fusion Measurement multimodal emotion recognition Multisource learning Physiological responses Physiology Representations Robustness |
Title | Comprehensive Multisource Learning Network for Cross-Subject Multimodal Emotion Recognition |
URI | https://ieeexplore.ieee.org/document/10575932 https://www.proquest.com/docview/3159503238 |
Volume | 9 |