Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals


Bibliographic Details
Published in Entropy (Basel, Switzerland) Vol. 24; no. 5; p. 577
Main Authors Luo, Junhai, Tian, Yuxin, Yu, Hang, Chen, Yu, Wu, Man
Format Journal Article
Language English
Published Switzerland MDPI AG 20.04.2022
MDPI
Subjects
Online Access Get full text
ISSN 1099-4300
DOI10.3390/e24050577

Abstract In recent decades, emotion recognition has received considerable attention. As interest has shifted toward physiological patterns, a wide range of elaborate physiological features has been devised and combined with various classification models to detect emotional states. To circumvent the labor of hand-designing features, we propose to learn affective and robust representations automatically through a Stacked Denoising Autoencoder (SDA) architecture with unsupervised pre-training followed by supervised fine-tuning. In this paper, we compare the performance of different features and models on three binary classification tasks based on the Valence-Arousal-Dominance (VAD) affect model. Decision fusion and feature fusion of electroencephalogram (EEG) and peripheral signals are performed on hand-engineered features; data-level fusion is performed for the deep-learning methods. It turns out that the fused data perform better than either modality alone. To take full advantage of deep-learning algorithms, we augment the original data and feed them directly into our training model. We use two deep architectures and another generative stacked semi-supervised architecture as references for comparison to test the method's practical effect. The results reveal that our scheme slightly outperforms the other three deep feature extractors and surpasses the state of the art based on hand-engineered features.
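The abstract's pipeline, greedy unsupervised pre-training of denoising-autoencoder layers followed by a supervised stage, can be sketched in a few lines of NumPy. This is a minimal illustration under assumed hyper-parameters (layer sizes, masking-noise rate, learning rates), not the authors' implementation; for brevity the supervised stage here trains only a logistic head on the learned codes, whereas the paper fine-tunes the whole network, and the toy data merely stand in for fused physiological features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def pretrain_dae(X, n_hidden, noise=0.3, lr=0.5, epochs=300):
    """Unsupervised pre-training of one denoising-autoencoder layer:
    corrupt the input with masking noise, learn to rebuild the clean input."""
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # encoder weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))   # decoder weights
    b2 = np.zeros(n_in)
    for _ in range(epochs):
        Xn = X * (rng.random(X.shape) > noise)    # masking corruption
        H = sigmoid(Xn @ W1 + b1)                 # encode corrupted input
        R = sigmoid(H @ W2 + b2)                  # reconstruct clean input
        dR = (R - X) * R * (1.0 - R)              # backprop of squared error
        dH = (dR @ W2.T) * H * (1.0 - H)
        W2 -= lr * H.T @ dR / len(X);  b2 -= lr * dR.mean(axis=0)
        W1 -= lr * Xn.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)
    return W1, b1                                 # keep only the encoder

def encode_stack(X, layers):
    """Forward pass through the stacked, pre-trained encoders."""
    for W, b in layers:
        X = sigmoid(X @ W + b)
    return X

def fit_sda(X, y, sizes=(16, 8), epochs=500, lr=1.0):
    """Greedy layer-wise pre-training, then a supervised logistic head
    (a simplification of the paper's full-network fine-tuning)."""
    layers, H = [], X
    for n_hidden in sizes:
        W, b = pretrain_dae(H, n_hidden)
        layers.append((W, b))
        H = sigmoid(H @ W + b)                    # codes feed the next layer
    w, b0 = np.zeros(H.shape[1]), 0.0
    for _ in range(epochs):                       # logistic regression on codes
        g = sigmoid(H @ w + b0) - y
        w -= lr * H.T @ g / len(y); b0 -= lr * g.mean()
    return layers, w, b0

# Hypothetical stand-in for fused multi-modal features: two separated classes.
X = np.vstack([rng.normal(0.1, 0.05, (40, 8)), rng.normal(0.9, 0.05, (40, 8))])
y = np.array([0] * 40 + [1] * 40)
layers, w, b0 = fit_sda(X, y)
pred = (sigmoid(encode_stack(X, layers) @ w + b0) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

The masking corruption forces each layer to learn representations that are robust to missing inputs, which is the property the paper relies on for noisy physiological signals.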
Author Luo, Junhai
Tian, Yuxin
Wu, Man
Chen, Yu
Yu, Hang
AuthorAffiliation School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China; 202021011418@std.uestc.edu.cn (Y.T.); 202121011427@std.uestc.edu.cn (H.Y.); 202022011429@std.uestc.edu.cn (Y.C.); 201821011420@std.uestc.edu.cn (M.W.)
BackLink https://www.ncbi.nlm.nih.gov/pubmed/35626462 (View this record in MEDLINE/PubMed)
CitedBy_id crossref_primary_10_1007_s13042_024_02158_8
crossref_primary_10_1109_TIM_2024_3369130
crossref_primary_10_1109_JPROC_2023_3286445
crossref_primary_10_3390_e24091187
crossref_primary_10_1088_1402_4896_ad5237
crossref_primary_10_1007_s12652_023_04674_x
crossref_primary_10_1007_s11571_024_10193_y
crossref_primary_10_1016_j_jneumeth_2024_110129
crossref_primary_10_3390_e24121830
crossref_primary_10_1016_j_inffus_2024_102536
crossref_primary_10_1109_JBHI_2022_3225330
crossref_primary_10_1088_2631_8695_adb00d
crossref_primary_10_3390_s22239102
Cites_doi 10.1109/PROC.1987.13824
10.1038/nbt0308-303
10.1016/0167-2789(88)90081-4
10.1109/ACCESS.2019.2922047
10.1109/34.954607
10.1126/science.1127647
10.1145/2001269.2001295
10.1098/rspa.1998.0193
10.1162/neco.2006.18.7.1527
10.1037/0022-3514.53.4.712
10.1007/978-3-540-45012-2_2
10.1109/EMBC.2016.7590834
10.1080/02699930126048
10.1109/T-AFFC.2011.15
10.21437/Eurospeech.2001-627
10.7551/mitpress/7503.003.0024
10.1109/TAFFC.2017.2768030
10.1109/T-AFFC.2011.25
10.1609/aaai.v31i2.19105
10.1007/978-3-642-24571-8_58
10.1109/TPAMI.2013.50
10.1016/0013-4694(70)90143-4
ContentType Journal Article
Copyright 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
2022 by the authors.
DOI 10.3390/e24050577
DatabaseName CrossRef
PubMed
Mechanical & Transportation Engineering Abstracts
Technology Research Database
ProQuest SciTech Collection
ProQuest Technology Collection
Materials Science & Engineering Collection
ProQuest Central (Alumni)
ProQuest Central UK/Ireland
ProQuest Central Essentials
ProQuest Central
Technology Collection
ProQuest One Community College
ProQuest Central Korea
Engineering Research Database
SciTech Premium Collection
Civil Engineering Abstracts
ProQuest Engineering Collection
Engineering Database
ProQuest Central Premium
ProQuest One Academic
Publicly Available Content Database
ProQuest One Academic Middle East (New)
ProQuest One Academic Eastern Edition (DO NOT USE)
ProQuest One Applied & Life Sciences
ProQuest One Academic
ProQuest One Academic UKI Edition
ProQuest Central China
Engineering Collection
MEDLINE - Academic
PubMed Central (Full Participant titles)
Directory of Open Access Journals - May need to register for free articles
DatabaseTitle CrossRef
PubMed
Publicly Available Content Database
Technology Collection
Technology Research Database
ProQuest One Academic Middle East (New)
Mechanical & Transportation Engineering Abstracts
ProQuest Central Essentials
ProQuest Central (Alumni Edition)
SciTech Premium Collection
ProQuest One Community College
ProQuest Central China
ProQuest Central
ProQuest One Applied & Life Sciences
ProQuest Engineering Collection
ProQuest Central Korea
ProQuest Central (New)
Engineering Collection
Civil Engineering Abstracts
Engineering Database
ProQuest One Academic Eastern Edition
ProQuest Technology Collection
ProQuest SciTech Collection
ProQuest One Academic UKI Edition
Materials Science & Engineering Collection
Engineering Research Database
ProQuest One Academic
ProQuest One Academic (New)
MEDLINE - Academic
DatabaseTitleList Publicly Available Content Database
CrossRef
MEDLINE - Academic
PubMed
Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 2
  dbid: NPM
  name: PubMed
  url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 3
  dbid: 8FG
  name: ProQuest Technology Collection
  url: https://search.proquest.com/technologycollection1
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 1099-4300
ExternalDocumentID oai_doaj_org_article_24e9ac76404c4c4fb90c2425f36fcfdd
PMC9141449
35626462
10_3390_e24050577
Genre Journal Article
GrantInformation_xml – fundername: Science & Technology Department of Sichuan Province
  grantid: 2020JDRC0007
ISSN 1099-4300
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 5
Keywords stacked denoising autoencoder
emotion recognition
unsupervised representation learning
electroencephalogram (EEG)
multi-source fusion
DEAP dataset
Language English
License https://creativecommons.org/licenses/by/4.0
Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
LinkModel DirectLink
ORCID 0000-0002-8435-007X
OpenAccessLink https://doaj.org/article/24e9ac76404c4c4fb90c2425f36fcfdd
PMID 35626462
PQID 2670145847
PQPubID 2032401
ParticipantIDs doaj_primary_oai_doaj_org_article_24e9ac76404c4c4fb90c2425f36fcfdd
pubmedcentral_primary_oai_pubmedcentral_nih_gov_9141449
proquest_miscellaneous_2671269719
proquest_journals_2670145847
pubmed_primary_35626462
crossref_primary_10_3390_e24050577
crossref_citationtrail_10_3390_e24050577
PublicationCentury 2000
PublicationDate 20220420
PublicationDateYYYYMMDD 2022-04-20
PublicationDate_xml – month: 4
  year: 2022
  text: 20220420
  day: 20
PublicationDecade 2020
PublicationPlace Switzerland
PublicationPlace_xml – name: Switzerland
– name: Basel
PublicationTitle Entropy (Basel, Switzerland)
PublicationTitleAlternate Entropy (Basel)
PublicationYear 2022
Publisher MDPI AG
MDPI
Publisher_xml – name: MDPI AG
– name: MDPI
References Lee (ref_7) 2011; 54
ref_14
Hinton (ref_31) 2006; 18
ref_11
ref_10
Pedregosa (ref_30) 2011; 12
Nikias (ref_21) 1987; 75
Vincent (ref_29) 2010; 11
Hinton (ref_27) 2006; 313
ref_19
Huang (ref_24) 1998; 454
(ref_25) 2008; 26
Koelstra (ref_9) 2012; 3
ref_16
Murugappan (ref_22) 2008; 1
Soleymani (ref_8) 2012; 3
Higuchi (ref_17) 1988; 31
Picard (ref_6) 2001; 23
Schmidt (ref_23) 2001; 15
Bengio (ref_26) 2013; 35
ref_20
ref_1
ref_3
ref_2
ref_28
Ekman (ref_5) 1987; 53
Murali (ref_12) 2019; 7
Jenke (ref_18) 2014; 5
Becker (ref_13) 2020; 11
ref_4
Hjorth (ref_15) 1970; 29
References_xml – ident: ref_3
– volume: 75
  start-page: 869
  year: 1987
  ident: ref_21
  article-title: Bispectrum Estimation: A Digital Signal Processing Framework
  publication-title: Proc. IEEE
  doi: 10.1109/PROC.1987.13824
– volume: 26
  start-page: 303
  year: 2008
  ident: ref_25
  article-title: What is principal component analysis?
  publication-title: Nat Biotechnol.
  doi: 10.1038/nbt0308-303
– volume: 31
  start-page: 277
  year: 1988
  ident: ref_17
  article-title: Approach to an irregular time series on the basis of the fractal theory
  publication-title: Phys. D
  doi: 10.1016/0167-2789(88)90081-4
– volume: 1
  start-page: 299
  year: 2008
  ident: ref_22
  article-title: EEG feature extraction for classifying emotions using FCM and FKM
  publication-title: Int. J. Comput. Commun.
– volume: 7
  start-page: 77905
  year: 2019
  ident: ref_12
  article-title: An Efficient Mixture Model Approach in Brain-Machine Interface Systems for Extracting the Psychological Status of Mentally Impaired Persons Using EEG Signals
  publication-title: IEEE Access.
  doi: 10.1109/ACCESS.2019.2922047
– volume: 5
  start-page: 327
  year: 2014
  ident: ref_18
  article-title: Feature extraction and selection for emotion recognition from EEG
  publication-title: IEEE Trans. Affect. Comput.
– volume: 23
  start-page: 1175
  year: 2001
  ident: ref_6
  article-title: Toward machine emotional intelligence: Analysis of affective physiological state
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/34.954607
– volume: 313
  start-page: 504
  year: 2006
  ident: ref_27
  article-title: Reducing the dimensionality of data with neural networks
  publication-title: Science
  doi: 10.1126/science.1127647
– volume: 54
  start-page: 95
  year: 2011
  ident: ref_7
  article-title: Unsupervised learning of hierarchical representations with convolutional deep belief networks
  publication-title: Commun. ACM
  doi: 10.1145/2001269.2001295
– volume: 454
  start-page: 903
  year: 1998
  ident: ref_24
  article-title: The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis
  publication-title: Proc. R. Soc. Lond. A
  doi: 10.1098/rspa.1998.0193
– ident: ref_14
– volume: 18
  start-page: 1527
  year: 2006
  ident: ref_31
  article-title: A fast learning algorithm for deep belief nets
  publication-title: Neural Comput.
  doi: 10.1162/neco.2006.18.7.1527
– volume: 53
  start-page: 712
  year: 1987
  ident: ref_5
  article-title: Universals and Cultural Differences in the Judgments of Facial Expressions of Emotion
  publication-title: J. Pers. Soc. Psychol.
  doi: 10.1037/0022-3514.53.4.712
– ident: ref_4
  doi: 10.1007/978-3-540-45012-2_2
– ident: ref_10
  doi: 10.1109/EMBC.2016.7590834
– volume: 15
  start-page: 487
  year: 2001
  ident: ref_23
  article-title: Frontal brain electrical activity (EEG) distinguishes valence and intensity of musical emotions
  publication-title: Cogn. Emot.
  doi: 10.1080/02699930126048
– ident: ref_2
– volume: 3
  start-page: 18
  year: 2012
  ident: ref_9
  article-title: DEAP: A database for emotion analysis; Using physiological signals
  publication-title: IEEE Trans. Affect. Comput.
  doi: 10.1109/T-AFFC.2011.15
– ident: ref_1
  doi: 10.21437/Eurospeech.2001-627
– ident: ref_28
  doi: 10.7551/mitpress/7503.003.0024
– volume: 11
  start-page: 244
  year: 2020
  ident: ref_13
  article-title: Emotion Recognition Based on High-Resolution EEG Recordings and Reconstructed Brain Sources
  publication-title: IEEE Trans. Affect. Comput.
  doi: 10.1109/TAFFC.2017.2768030
– volume: 3
  start-page: 42
  year: 2012
  ident: ref_8
  article-title: A multimodal database for affect recognition and implicit tagging
  publication-title: IEEE Trans. Affect. Comput.
  doi: 10.1109/T-AFFC.2011.25
– ident: ref_11
  doi: 10.1609/aaai.v31i2.19105
– ident: ref_16
  doi: 10.1007/978-3-642-24571-8_58
– volume: 35
  start-page: 1798
  year: 2013
  ident: ref_26
  article-title: Representation learning: A review and new perspectives
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2013.50
– volume: 29
  start-page: 306
  year: 1970
  ident: ref_15
  article-title: EEG analysis based on time domain properties
  publication-title: Electroencephalogr. Clin. Neurophysiol.
  doi: 10.1016/0013-4694(70)90143-4
– ident: ref_19
– volume: 12
  start-page: 2825
  year: 2011
  ident: ref_30
  article-title: Scikit-learn: Machine learning in python
  publication-title: J. Mach. Learn Res.
– ident: ref_20
– volume: 11
  start-page: 3371
  year: 2010
  ident: ref_29
  article-title: Stacked denoising autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
  publication-title: J. Mach. Learn Res.
SourceID doaj
pubmedcentral
proquest
pubmed
crossref
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Enrichment Source
StartPage 577
SubjectTerms Accuracy
Algorithms
Arousal
Artificial intelligence
Classification
Computer science
Datasets
DEAP dataset
Deep learning
Discriminant analysis
electroencephalogram (EEG)
Electroencephalography
Emotion recognition
Emotional factors
Emotions
Experiments
Feature extraction
Machine learning
multi-source fusion
Neural networks
Neurosciences
Noise reduction
Physiology
stacked denoising autoencoder
Support vector machines
Training
unsupervised representation learning
Wavelet transforms
Title Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals
URI https://www.ncbi.nlm.nih.gov/pubmed/35626462
https://www.proquest.com/docview/2670145847
https://www.proquest.com/docview/2671269719
https://pubmed.ncbi.nlm.nih.gov/PMC9141449
https://doaj.org/article/24e9ac76404c4c4fb90c2425f36fcfdd
Volume 24