Emotion Recognition From Multimodal Physiological Signals Using a Regularized Deep Fusion of Kernel Machine

Bibliographic Details
Published in: IEEE Transactions on Cybernetics, Vol. 51, No. 9, pp. 4386–4399
Main Authors: Zhang, Xiaowei; Liu, Jinyong; Shen, Jian; Li, Shaojie; Hou, Kechen; Hu, Bin; Gao, Jin; Zhang, Tong
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.09.2021

Abstract These days, physiological signals have been studied more broadly for emotion recognition to realize emotional intelligence in human-computer interaction. However, due to the complexity of emotions and individual differences in physiological responses, how to design reliable and effective models has become an important issue. In this article, we propose a regularized deep fusion framework for emotion recognition based on multimodal physiological signals. After extracting the effective features from different types of physiological signals, we construct ensemble dense embeddings of multimodal features using kernel matrices, and then utilize a deep network architecture to learn task-specific representations for each kind of physiological signal from these ensemble dense embeddings. Finally, a global fusion layer with a regularization term, which can efficiently explore the correlation and diversity among all of the representations in a synchronous optimization process, is designed to fuse generated representations. Experiments on two benchmark datasets show that this framework can improve the performance of subject-independent emotion recognition compared to single-modal classifiers or other fusion methods. Data visualization also demonstrates that the final fusion representation exhibits higher class-separability power for emotion recognition.
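The abstract outlines a three-stage pipeline: per-modality features are turned into ensemble dense embeddings through kernel matrices, a deep branch per modality learns a task-specific representation from its embedding, and a global fusion layer with a regularization term combines the branches while jointly balancing their correlation and diversity. The sketch below illustrates that flow in PyTorch; it is not the authors' code. The RBF kernel, the anchor-based embedding, the two-layer branches, the softmax fusion weights, and the pairwise cosine penalty (standing in for the paper's correlation/diversity regularizer) are all illustrative assumptions.

import torch
import torch.nn as nn

def rbf_kernel(x, anchors, gamma=0.1):
    # Dense embedding: represent each sample by its kernel similarity
    # to a set of anchor samples (one column per anchor).
    return torch.exp(-gamma * torch.cdist(x, anchors).pow(2))

class ModalityNet(nn.Module):
    # Deep branch that learns a task-specific representation for one
    # physiological modality from its kernel embedding.
    def __init__(self, n_anchors, hidden, rep_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_anchors, hidden), nn.ReLU(),
            nn.Linear(hidden, rep_dim), nn.ReLU())

    def forward(self, k):
        return self.net(k)

class RegularizedFusion(nn.Module):
    # Global fusion layer: a learned weighted combination of the
    # per-modality representations, followed by a linear classifier.
    def __init__(self, n_modalities, rep_dim, n_classes):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_modalities) / n_modalities)
        self.classifier = nn.Linear(rep_dim, n_classes)

    def forward(self, reps):
        w = torch.softmax(self.weights, dim=0)
        fused = sum(wi * r for wi, r in zip(w, reps))
        return self.classifier(fused)

    def regularizer(self, reps):
        # One plausible diversity term: penalize pairwise similarity
        # between the modality representations so the branches do not
        # collapse onto redundant features.
        loss = torch.zeros(())
        for i in range(len(reps)):
            for j in range(i + 1, len(reps)):
                loss = loss + torch.cosine_similarity(
                    reps[i], reps[j], dim=1).pow(2).mean()
        return loss

An illustrative end-to-end step on synthetic tensors (the feature sizes and modality names are placeholders, not those of the benchmark datasets):

eeg, gsr = torch.randn(32, 160), torch.randn(32, 40)    # two modalities
anchors = [eeg[:10], gsr[:10]]                           # anchor samples per modality
branches = [ModalityNet(10, 64, 32), ModalityNet(10, 64, 32)]
fusion = RegularizedFusion(n_modalities=2, rep_dim=32, n_classes=2)
reps = [b(rbf_kernel(x, a)) for b, x, a in zip(branches, [eeg, gsr], anchors)]
logits = fusion(reps)
labels = torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(logits, labels) + 0.1 * fusion.regularizer(reps)
loss.backward()   # branches and fusion layer are optimized synchronously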
Author_xml – sequence: 1
  givenname: Xiaowei
  orcidid: 0000-0001-8562-416X
  surname: Zhang
  fullname: Zhang, Xiaowei
  email: zhangxw@lzu.edu.cn
  organization: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
– sequence: 2
  givenname: Jinyong
  orcidid: 0000-0003-0051-5779
  surname: Liu
  fullname: Liu, Jinyong
  email: liujy2016@lzu.edu.cn
  organization: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
– sequence: 3
  givenname: Jian
  orcidid: 0000-0001-6099-3209
  surname: Shen
  fullname: Shen, Jian
  email: shenj17@lzu.edu.cn
  organization: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
– sequence: 4
  givenname: Shaojie
  orcidid: 0000-0003-2432-0482
  surname: Li
  fullname: Li, Shaojie
  email: lishj2019@lzu.edu.cn
  organization: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
– sequence: 5
  givenname: Kechen
  surname: Hou
  fullname: Hou, Kechen
  email: houkch16@lzu.edu.cn
  organization: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
– sequence: 6
  givenname: Bin
  orcidid: 0000-0002-1324-3285
  surname: Hu
  fullname: Hu, Bin
  email: hub17@lzu.edu.cn
  organization: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
– sequence: 7
  givenname: Jin
  surname: Gao
  fullname: Gao, Jin
  email: gaoj2018@lzu.edu.cn
  organization: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
– sequence: 8
  givenname: Tong
  orcidid: 0000-0002-7025-6365
  surname: Zhang
  fullname: Zhang, Tong
  email: tony@scut.edu.cn
  organization: School of Electronics and Information, South China University of Technology, Guangzhou, China
– sequence: 9
  givenname: Bin
  orcidid: 0000-0003-3514-5413
  surname: Hu
  fullname: Hu, Bin
  email: bh@lzu.edu.cn
  organization: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
BackLink https://www.ncbi.nlm.nih.gov/pubmed/32413939 (view this record in MEDLINE/PubMed)
CODEN ITCEB8
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
DOI 10.1109/TCYB.2020.2987575
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Mechanical & Transportation Engineering Abstracts
Technology Research Database
ANTE: Abstracts in New Technology & Engineering
Engineering Research Database
Aerospace Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
Discipline Sciences (General)
EISSN 2168-2275
EndPage 4399
ExternalDocumentID 32413939
10_1109_TCYB_2020_2987575
9093122
Genre orig-research
Journal Article
GrantInformation_xml – fundername: National Basic Research Program of China (973 Program)
  grantid: 2014CB744600
  funderid: 10.13039/501100012166
– fundername: Program of Beijing Municipal Science and Technology Commission
  grantid: Z171100000117005
  funderid: 10.13039/501100009592
– fundername: National Key Research and Development Program of China
  grantid: 2019YFA0706200
  funderid: 10.13039/501100012166
– fundername: National Natural Science Foundation of China
  grantid: 61632014; 61402211
  funderid: 10.13039/501100001809
GroupedDBID 0R~
4.4
6IK
97E
AAJGR
AARMG
AASAJ
AAWTH
ABAZT
ABQJQ
ABVLG
ACIWK
AENEX
AGQYO
AGSQL
AHBIQ
AKJIK
AKQYR
ALMA_UNASSIGNED_HOLDINGS
ATWAV
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
EBS
EJD
HZ~
IFIPE
IPLJI
JAVBF
M43
O9-
OCL
PQQKQ
RIA
RIE
RNS
AAYXX
CITATION
RIG
NPM
7SC
7SP
7TB
8FD
F28
FR3
H8D
JQ2
L7M
L~C
L~D
7X8
ISSN 2168-2267
IsPeerReviewed true
IsScholarly true
Issue 9
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0002-1324-3285
0000-0003-3514-5413
0000-0002-7025-6365
0000-0003-2432-0482
0000-0001-8562-416X
0000-0003-0051-5779
0000-0001-6099-3209
PMID 32413939
PQID 2572664128
PQPubID 85422
PageCount 14
PublicationDate 2021-09-01
PublicationPlace United States (Piscataway)
PublicationTitle IEEE transactions on cybernetics
PublicationTitleAbbrev TCYB
PublicationTitleAlternate IEEE Trans Cybern
PublicationYear 2021
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref35
ref13
ref34
ref12
ref37
ref15
ref14
ref31
kumar (ref28) 2009
ref33
ref32
ref10
kumar (ref27) 2012; 13
ref2
levy (ref26) 2014
ref17
ref38
ref16
ref19
ref18
qi (ref3) 2007
zhang (ref30) 2017
ptaszynski (ref1) 2009
rakotomamonjy (ref36) 2008; 9
ref45
ref23
ref25
ref20
ref42
ref41
ref22
meyer (ref24) 2002
ref44
ref21
ref43
ref29
ref7
demšar (ref39) 2006; 7
ref9
ref4
ref6
ref5
harper (ref11) 2019
ref40
pan (ref8) 2016
References_xml – ident: ref31
  doi: 10.1109/TPAMI.2017.2670560
– ident: ref6
  doi: 10.1016/S0028-3932(99)00017-2
– volume: 9
  start-page: 2491
  year: 2008
  ident: ref36
  article-title: SimpleMKL
  publication-title: J Mach Learn Res
– ident: ref5
  doi: 10.1109/TSMCA.2012.2220542
– start-page: 2914
  year: 2017
  ident: ref30
  article-title: Learning sparse task relations in multi-task learning
  publication-title: Proc 31st AAAI Conf Artif Intell
– start-page: 1060
  year: 2009
  ident: ref28
  article-title: Ensemble Nyström method
  publication-title: Proc Adv Neural Inf Process Syst
– start-page: 2177
  year: 2014
  ident: ref26
  article-title: Neural word embedding as implicit matrix factorization
  publication-title: Proc Adv Neural Inf Process Syst
– ident: ref23
  doi: 10.1109/CVPRW.2009.5204298
– ident: ref45
  doi: 10.1201/9780203749289
– ident: ref41
  doi: 10.1109/TAFFC.2019.2954118
– ident: ref18
  doi: 10.1109/TCYB.2018.2797176
– ident: ref15
  doi: 10.1090/S0002-9947-1950-0051437-7
– ident: ref17
  doi: 10.1109/TMM.2012.2188783
– ident: ref10
  doi: 10.1109/TAFFC.2017.2781732
– ident: ref14
  doi: 10.1109/TAFFC.2015.2392932
– ident: ref20
  doi: 10.1109/CVPR.2014.223
– ident: ref35
  doi: 10.1109/TSMCA.2012.2216869
– ident: ref4
  doi: 10.1007/s12369-019-00524-z
– ident: ref38
  doi: 10.2307/2279372
– ident: ref16
  doi: 10.1109/CVPR.2010.5540120
– ident: ref7
  doi: 10.1109/TAFFC.2017.2695999
– start-page: 305
  year: 2002
  ident: ref24
  article-title: Continuous audio-visual digit recognition using decision fusion
  publication-title: Proc IEEE Int Conf Acoust Speech Signal Process (CASSP)
– start-page: 2063
  year: 2016
  ident: ref8
  article-title: An EEG-based brain-computer interface for emotion recognition
  publication-title: Proc Int Joint Conf Neural Netw
– volume: 7
  start-page: 1
  year: 2006
  ident: ref39
  article-title: Statistical comparisons of classifiers over multiple data sets
  publication-title: J Mach Learn Res
– ident: ref44
  doi: 10.1109/TAFFC.2014.2339834
– ident: ref34
  doi: 10.3389/fict.2017.00001
– ident: ref12
  doi: 10.1504/IJDMB.2017.086097
– volume: 13
  start-page: 981
  year: 2012
  ident: ref27
  article-title: Sampling methods for the Nyström method
  publication-title: J Mach Learn Res
– ident: ref29
  doi: 10.1109/ICASSP.2015.7178347
– ident: ref40
  doi: 10.1109/TPAMI.2017.2772235
– year: 2019
  ident: ref11
  publication-title: A bayesian deep learning framework for end-to-end prediction of emotion from heartbeat
– ident: ref2
  doi: 10.1109/TAFFC.2019.2934412
– ident: ref37
  doi: 10.1016/j.neucom.2014.11.078
– ident: ref13
  doi: 10.1109/T-AFFC.2011.15
– ident: ref19
  doi: 10.1109/ICASSP.2017.7952681
– ident: ref22
  doi: 10.1109/CGIV.2007.33
– start-page: 1469
  year: 2009
  ident: ref1
  article-title: Towards context aware emotional intelligence in machines: Computing contextual appropriateness of affective states
  publication-title: Proc Int Joint Conf Artif Intell
– ident: ref42
  doi: 10.1016/j.neucom.2014.02.057
– ident: ref25
  doi: 10.1109/TNNLS.2018.2804895
– ident: ref32
  doi: 10.1109/TAFFC.2017.2712143
– ident: ref33
  doi: 10.1109/TAFFC.2018.2840973
– ident: ref43
  doi: 10.1109/TCYB.2019.2904052
– ident: ref9
  doi: 10.1109/ICBBE.2008.670
– ident: ref21
  doi: 10.1007/s12193-015-0195-2
– start-page: 483
  year: 2007
  ident: ref3
  article-title: Facial and speech recognition emotion in distance education system
  publication-title: The Int Conf on Intell Pervasive Comput
StartPage 4386
SubjectTerms Brain modeling
Computer architecture
Deep neural network
Emotion recognition
Emotions
Feature extraction
Fuses
Kernel
kernel machine
Kernels
multimodal fusion
Optimization
Physiological responses
Physiology
Regularization
Representations
Scientific visualization
Task analysis
Title Emotion Recognition From Multimodal Physiological Signals Using a Regularized Deep Fusion of Kernel Machine
URI https://ieeexplore.ieee.org/document/9093122
https://www.ncbi.nlm.nih.gov/pubmed/32413939
https://www.proquest.com/docview/2572664128
https://www.proquest.com/docview/2404048593
Volume 51