Joint low-rank tensor fusion and cross-modal attention for multimodal physiological signals based emotion recognition

Bibliographic Details
Published in: Physiological Measurement, Vol. 45, No. 7, pp. 75003–75016
Main Authors: Wan, Xin; Wang, Yongxiong; Wang, Zhe; Tang, Yiheng; Liu, Benke
Format: Journal Article
Language: English
Published: England: IOP Publishing, 01.07.2024
Subjects: deep neural network; emotion recognition; multimodal fusion; physiological signals

Abstract Objective. Emotion recognition based on physiological signals is a prominent research domain in the field of human-computer interaction. Previous studies predominantly focused on unimodal data, giving limited attention to the interplay among multiple modalities. Within the scope of multimodal emotion recognition, integrating the information from diverse modalities and leveraging their complementary information are the two essential issues in obtaining robust representations. Approach. We therefore propose an intermediate fusion strategy that combines low-rank tensor fusion with cross-modal attention to enhance the fusion of the electroencephalogram (EEG), electrooculogram (EOG), electromyography (EMG), and galvanic skin response (GSR). First, handcrafted features from the distinct modalities are individually fed to corresponding feature extractors to obtain latent features. Subsequently, low-rank tensor fusion integrates the information into a modality interaction representation. Finally, a cross-modal attention module explores the potential relationships between the distinct latent features and the modality interaction representation and recalibrates the weights of the different modalities; the resultant representation is adopted for emotion recognition. Main results. To validate the effectiveness of the proposed method, we conduct subject-independent experiments on the DEAP dataset. The proposed method achieves accuracies of 73.82% and 74.55% for valence and arousal classification, respectively. Significance. The results of extensive experiments verify the outstanding performance of the proposed method.
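To make the fusion step concrete, below is a minimal PyTorch sketch of low-rank tensor fusion in the spirit of the low-rank multimodal fusion (LMF) technique of Liu et al (2018), which this line of work builds on; the class name, latent dimensions, and rank are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LowRankTensorFusion(nn.Module):
    """Fuse several modality embeddings via low-rank factors (LMF-style sketch).

    Rather than materialising the full outer-product tensor of all modalities,
    each modality keeps `rank` factor matrices; the elementwise product of the
    per-modality projections, collapsed over the rank dimension, yields the
    modality interaction representation.
    """

    def __init__(self, in_dims, out_dim, rank=4):
        super().__init__()
        self.rank = rank
        # One factor tensor per modality; the +1 appends a constant feature
        # so unimodal terms survive alongside interaction terms.
        self.factors = nn.ParameterList(
            nn.Parameter(torch.randn(rank, d + 1, out_dim) * 0.1) for d in in_dims
        )
        self.fusion_weights = nn.Parameter(torch.randn(1, rank) * 0.1)
        self.fusion_bias = nn.Parameter(torch.zeros(1, out_dim))

    def forward(self, feats):
        # feats: list of (batch, d_m) tensors, one per modality.
        fused = None
        for z, w in zip(feats, self.factors):
            ones = torch.ones(z.size(0), 1, device=z.device)
            z1 = torch.cat([z, ones], dim=1)           # (batch, d_m + 1)
            proj = torch.einsum('bd,rdo->bro', z1, w)  # (batch, rank, out_dim)
            fused = proj if fused is None else fused * proj
        # Collapse the rank dimension with learned weights.
        out = torch.einsum('r,bro->bo', self.fusion_weights.squeeze(0), fused)
        return out + self.fusion_bias

# Hypothetical per-modality latent dimensions for EEG, EOG, EMG, GSR.
fusion = LowRankTensorFusion(in_dims=[128, 32, 32, 16], out_dim=64, rank=4)
feats = [torch.randn(8, d) for d in (128, 32, 32, 16)]
interaction = fusion(feats)  # (8, 64) modality interaction representation
```

Appending a constant 1 to each latent vector before projection means the summed rank-wise products approximate the full outer-product tensor, including lower-order unimodal terms, at a fraction of the parameter count.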
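Similarly, here is a hedged sketch of the cross-modal attention step: the fused interaction representation queries the per-modality latent features (all assumed to be projected to a common dimension), so the attention weights recalibrate each modality's contribution. The head count and the residual-plus-LayerNorm wiring are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """The fused interaction representation attends over per-modality latent
    features; the attention weights act as soft per-modality importances."""

    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, fused, latents):
        # fused: (batch, dim); latents: list of (batch, dim), one per modality.
        query = fused.unsqueeze(1)            # (batch, 1, dim)
        kv = torch.stack(latents, dim=1)      # (batch, n_modalities, dim)
        attended, weights = self.attn(query, kv, kv)
        # weights: (batch, 1, n_modalities) modality recalibration scores.
        out = self.norm(fused + attended.squeeze(1))
        return out, weights

# Illustrative usage with four modalities projected to a shared 64-d space.
cma = CrossModalAttention(dim=64)
fused = torch.randn(8, 64)
latents = [torch.randn(8, 64) for _ in range(4)]
rep, modality_weights = cma(fused, latents)  # rep: (8, 64)
```

Chaining the two modules mirrors the pipeline the abstract describes: per-modality feature extractors produce latent features, low-rank fusion yields the interaction representation, cross-modal attention recalibrates it, and a classifier head predicts valence and arousal.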
Author affiliations: Wan, Xin; Wang, Yongxiong (ORCID 0000-0002-3242-0857); Wang, Zhe; Tang, Yiheng; Liu, Benke: all at the School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, People's Republic of China
CODEN: PMEAE3
Cited by: 10.3390/brainsci14121252; 10.2478/amns.2024.2736
Copyright: 2024 Institute of Physics and Engineering in Medicine. All rights, including for text and data mining, AI training, and similar technologies, are reserved.
DOI: 10.1088/1361-6579/ad5bbc
Discipline: Medicine; Engineering; Physics
EISSN: 1361-6579
Funding: Natural Science Foundation of Shanghai, grant 22ZR1443700
ISSN: 0967-3334
Peer reviewed: Yes
Scholarly: Yes
Issue: 7
Keywords: Emotion recognition; Deep neural network; Physiological signals; Multimodal fusion
License: This article is available under the terms of the IOP-Standard License. © 2024 Institute of Physics and Engineering in Medicine.
Notes: PMEA-105650.R1
PMID: 38917842
Page count: 14
Publication date: 2024-07-01
Publication place: England
Publication title: Physiological Measurement (abbreviated Physiol. Meas.)
Publication year: 2024
Publisher: IOP Publishing
URI: https://iopscience.iop.org/article/10.1088/1361-6579/ad5bbc
https://www.ncbi.nlm.nih.gov/pubmed/38917842
https://www.proquest.com/docview/3072291128
Volume: 45