Contrastive Learning of Subject-Invariant EEG Representations for Cross-Subject Emotion Recognition

Bibliographic Details
Published in: IEEE Transactions on Affective Computing, Vol. 14, No. 3, pp. 2496-2511
Main Authors: Shen, Xinke; Liu, Xianggen; Hu, Xin; Zhang, Dan; Song, Sen
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2023
Subjects: Artificial neural networks; Brain modeling; brain-computer interface; contrastive learning; cross-subject; Datasets; EEG; Electroencephalography; Emotion recognition; Emotions; Feature extraction; Learning; Neuroscience; Representations; Stimuli; Testing; Training
Online Access: https://ieeexplore.ieee.org/document/9748967 (IEEE Xplore); https://www.proquest.com/docview/2866483980 (ProQuest)

Abstract: EEG signals have been reported to be informative and reliable for emotion recognition in recent years. However, the inter-subject variability of emotion-related EEG signals still poses a great challenge for practical applications of EEG-based emotion recognition. Inspired by recent neuroscience studies on inter-subject correlation, we proposed a Contrastive Learning method for Inter-Subject Alignment (CLISA) to tackle the cross-subject emotion recognition problem. Contrastive learning was employed to minimize inter-subject differences by maximizing the similarity in EEG signal representations across subjects when they received the same emotional stimuli, as opposed to different ones. Specifically, a convolutional neural network was applied to learn inter-subject aligned spatiotemporal representations from EEG time series in contrastive learning. The aligned representations were subsequently used to extract differential entropy features for emotion classification. CLISA achieved state-of-the-art cross-subject emotion recognition performance on our THU-EP dataset with 80 subjects and the publicly available SEED dataset with 15 subjects. It could generalize to unseen subjects or unseen emotional stimuli in testing. Furthermore, the spatiotemporal representations learned by CLISA could provide insights into the neural mechanisms of human emotion processing.
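
The abstract describes two technical steps: a contrastive objective that pulls together two subjects' representations of the same stimulus while pushing apart representations of different stimuli, and differential entropy (DE) features extracted from the aligned representations. Below is a minimal sketch of the contrastive step in an InfoNCE/NT-Xent style; it is not the authors' released code, and the encoder output shapes and temperature value are illustrative assumptions.

```python
# Hedged sketch of an inter-subject contrastive loss: row i of z_a and z_b
# are two subjects' representations of the SAME stimulus segment (positive
# pair); all other rows in the batch act as negatives.
import torch
import torch.nn.functional as F

def inter_subject_contrastive_loss(z_a: torch.Tensor,
                                   z_b: torch.Tensor,
                                   temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (n_stimuli, dim) encoder outputs from two subjects."""
    z_a = F.normalize(z_a, dim=1)        # unit vectors -> cosine similarity
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.T / temperature   # (n, n) cross-subject similarity matrix
    targets = torch.arange(z_a.size(0))  # matching stimuli lie on the diagonal
    # Symmetric cross-entropy: align subject A to B and B to A.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))

# Example: 8 stimulus segments, 128-d representations from a CNN encoder.
loss = inter_subject_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```

For the feature-extraction step, differential entropy of a band-filtered EEG segment is commonly computed under a Gaussian assumption, where it reduces to a closed form in the signal variance. The function below is that standard formulation from the EEG literature, not code taken from this paper.

```python
# DE of a Gaussian-assumed signal: 0.5 * ln(2 * pi * e * variance).
import numpy as np

def differential_entropy(x: np.ndarray) -> float:
    """x: 1-D band-pass-filtered EEG segment for one channel."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))
```
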
Authors:
– Shen, Xinke (ORCID 0000-0001-8531-5033; sxk17@mails.tsinghua.edu.cn), Department of Biomedical Engineering and Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
– Liu, Xianggen (liuxianggen@scu.edu.cn), College of Computer Science, Sichuan University, Chengdu, Sichuan, China
– Hu, Xin (ORCID 0000-0003-0714-689X; huxin530@gmail.com), Department of Psychology and Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
– Zhang, Dan (ORCID 0000-0002-7592-3200; dzhang@tsinghua.edu.cn), Department of Psychology and Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
– Song, Sen (ORCID 0000-0001-5587-0730; songsen@tsinghua.edu.cn), Department of Biomedical Engineering and Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
CODEN: ITACBQ
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
DOI: 10.1109/TAFFC.2022.3164516
Discipline: Computer Science
EISSN: 1949-3045
Genre: Original research
Funding:
– Guoqiang Institute
– National Key Research and Development Program of China (Grant 2021ZD0200300)
– National Natural Science Foundation of China (Grants 61977041 and 61836004)
– Beijing Academy of Artificial Intelligence
– Spring Breeze Fund (Grant 2021Z99CFY037)
– Center for Brain-inspired Computing Research, Beijing Innovation Center for Future Chips
– Tsinghua University Initiative Scientific Research Program (Grant 20197010009)
– IDG/McGovern Institute for Brain Research
– Tsinghua University
– Key Scientific Technological Innovation Project by Ministry of Education
ISSN: 1949-3045
Peer Reviewed: Yes
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html; https://doi.org/10.15223/policy-029; https://doi.org/10.15223/policy-037