Generative Listener EEG for Speech Emotion Recognition Using Generative Adversarial Networks With Compressed Sensing

Bibliographic Details
Published in IEEE journal of biomedical and health informatics Vol. 28; no. 4; pp. 2025 - 2036
Main Authors Chang, Jiang, Zhang, Zhixin, Wang, Zelin, Li, Jiacheng, Meng, Linsheng, Lin, Pan
Format Journal Article
Language English
Published United States IEEE 01.04.2024
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)

Abstract Currently, emotional features in speech emotion recognition are typically extracted from the speech itself. However, recognition accuracy can be influenced by factors such as semantics, language, and cross-speech datasets. Achieving emotional judgments consistent with those of human listeners is a key challenge for AI to address. Electroencephalography (EEG) signals have proven to be an effective means of capturing authentic and meaningful emotional information in humans. This positions EEG as a promising tool for detecting emotional cues conveyed in speech. In this study, we propose a novel approach named CS-GAN that generates listener EEGs in response to a speaker's speech, specifically aimed at enhancing cross-subject emotion recognition. We utilize generative adversarial networks (GANs) to establish a mapping relationship between speech and EEGs and thereby generate stimulus-induced EEGs. Furthermore, we integrate compressed sensing (CS) theory into the GAN-based EEG generation method, enhancing the fidelity and diversity of the generated EEGs. The generated EEGs are then processed by a CNN-LSTM model to identify the emotional categories conveyed in the speech. By averaging these EEGs, we obtain event-related potentials (ERPs) that improve the cross-subject capability of the method. The experimental results demonstrate that the EEGs generated by this method outperform real listener EEGs by 9.31% in cross-subject emotion recognition tasks. Furthermore, the ERPs yield an improvement of 43.59%, providing evidence for the effectiveness of this method in cross-subject emotion recognition.
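The abstract describes a pipeline: a conditional GAN maps a speaker's speech to synthetic listener EEG, a compressed-sensing (CS) term constrains generation, a CNN-LSTM classifies the emotion of the generated trials, and trial averaging yields ERP-like signals. The sketch below illustrates one plausible reading of that pipeline. It is a minimal illustration, not the authors' implementation: the EEG channel count, window length, network sizes, class count, and the specific CS consistency loss are all assumptions, since this record does not include architectural details.

    # Illustrative sketch only: all sizes, names, and loss terms below are
    # assumptions; the record does not specify the paper's architecture.
    import torch
    import torch.nn as nn

    N_CH, N_T = 32, 256               # assumed EEG channels, samples per trial
    SPEECH_DIM, NOISE_DIM = 128, 64   # assumed speech-embedding and noise dims

    class Generator(nn.Module):
        """Maps a speech embedding plus noise to a synthetic listener-EEG trial."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(SPEECH_DIM + NOISE_DIM, 512), nn.ReLU(),
                nn.Linear(512, N_CH * N_T), nn.Tanh(),
            )
        def forward(self, speech, noise):
            x = torch.cat([speech, noise], dim=1)
            return self.net(x).view(-1, N_CH, N_T)

    class Discriminator(nn.Module):
        """Scores (speech, EEG) pairs as real or generated."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(SPEECH_DIM + N_CH * N_T, 512), nn.LeakyReLU(0.2),
                nn.Linear(512, 1),
            )
        def forward(self, speech, eeg):
            return self.net(torch.cat([speech, eeg.flatten(1)], dim=1))

    # One plausible CS consistency term: random Gaussian measurements of real
    # and generated EEG should agree under a fixed sensing matrix.
    M = 64                                 # assumed measurements per channel
    Phi = torch.randn(M, N_T) / M ** 0.5   # fixed sensing matrix

    def cs_loss(fake_eeg, real_eeg):
        return nn.functional.mse_loss(fake_eeg @ Phi.T, real_eeg @ Phi.T)

    class CNNLSTM(nn.Module):
        """CNN-LSTM emotion classifier over (channels, time) EEG trials."""
        def __init__(self, n_classes=4):   # class count assumed
            super().__init__()
            self.conv = nn.Conv1d(N_CH, 64, kernel_size=7, padding=3)
            self.lstm = nn.LSTM(64, 128, batch_first=True)
            self.head = nn.Linear(128, n_classes)
        def forward(self, eeg):
            h = torch.relu(self.conv(eeg)).transpose(1, 2)  # (B, T, 64)
            _, (hn, _) = self.lstm(h)
            return self.head(hn[-1])

    # ERP-style averaging: average many generated trials for one stimulus to
    # suppress trial-specific variability before classification.
    G = Generator()
    speech = torch.randn(1, SPEECH_DIM)
    trials = G(speech.expand(32, -1), torch.randn(32, NOISE_DIM))
    erp = trials.mean(dim=0, keepdim=True)  # (1, N_CH, N_T) pseudo-ERP
    logits = CNNLSTM()(erp)

A fixed random Gaussian sensing matrix is the textbook CS choice because such matrices satisfy the restricted isometry property with high probability; matching compressed measurements of real and generated trials is one way "integrating CS theory into the GAN" could be realized, though the paper's actual formulation may differ.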
Author Lin, Pan
Wang, Zelin
Meng, Linsheng
Zhang, Zhixin
Chang, Jiang
Li, Jiacheng
Author_xml – sequence: 1
  givenname: Jiang
  orcidid: 0000-0001-9726-902X
  surname: Chang
  fullname: Chang, Jiang
  email: changj@sxu.edu.cn
  organization: Institute of Big Data Science and Industry, the School of Computer and Information Technology, Shanxi University, Taiyuan, China
– sequence: 2
  givenname: Zhixin
  orcidid: 0009-0006-6520-4129
  surname: Zhang
  fullname: Zhang, Zhixin
  email: zzx20221255@163.com
  organization: Institute of Big Data Science and Industry, the School of Computer and Information Technology, Shanxi University, Taiyuan, China
– sequence: 3
  givenname: Zelin
  orcidid: 0009-0003-5405-7432
  surname: Wang
  fullname: Wang, Zelin
  email: 1459937466@qq.com
  organization: Institute of Big Data Science and Industry, the School of Computer and Information Technology, Shanxi University, Taiyuan, China
– sequence: 4
  givenname: Jiacheng
  orcidid: 0009-0008-7265-5985
  surname: Li
  fullname: Li, Jiacheng
  email: jiachengli0420@163.com
  organization: Institute of Big Data Science and Industry, the School of Computer and Information Technology, Shanxi University, Taiyuan, China
– sequence: 5
  givenname: Linsheng
  orcidid: 0009-0003-3255-2416
  surname: Meng
  fullname: Meng, Linsheng
  email: 389522854@qq.com
  organization: School of Physical Education, Shanxi University, Taiyuan, China
– sequence: 6
  givenname: Pan
  orcidid: 0000-0001-7473-905X
  surname: Lin
  fullname: Lin, Pan
  email: linpan@hunnu.edu.cn
  organization: Center for Mind&Brain Sciences and Institute of Interdisciplinary Studies, Hunan Normal University, Changsha, China
BackLink https://www.ncbi.nlm.nih.gov/pubmed/38289847 (View this record in MEDLINE/PubMed)
CODEN IJBHA9
CitedBy_id crossref_primary_10_3390_a17080329
crossref_primary_10_1016_j_bspc_2025_107636
Cites_doi 10.1016/j.measurement.2019.107117
10.1038/s41598-018-37359-z
10.1109/tnsre.2021.3125023
10.1155/2021/2520394
10.1109/msp.2007.914731
10.1109/EMBC.2018.8512865
10.5772/intechopen.94574
10.1109/ICHMS53169.2021.9582457
10.1109/IJCNN48605.2020.9206942
10.4103/0972-6748.57865
10.1109/taffc.2022.3170369
10.1109/tit.2006.871582
10.1016/j.tics.2005.11.009
10.1109/access.2018.2813358
10.1109/taffc.2020.3025777
10.1088/1741-2552/ace73f
10.3389/fnhum.2020.557534
10.1109/acssc.2009.5469828
10.5555/2969033.2969125
10.1109/MSP.2007.914731
10.1007/978-3-030-22796-8_16
10.1016/j.ijpsycho.2013.06.025
10.3389/fnins.2021.642251
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024
DBID 97E
RIA
RIE
AAYXX
CITATION
NPM
7QF
7QO
7QQ
7SC
7SE
7SP
7SR
7TA
7TB
7U5
8BQ
8FD
F28
FR3
H8D
JG9
JQ2
K9.
KR7
L7M
L~C
L~D
NAPCQ
P64
7X8
DOI 10.1109/JBHI.2024.3360151
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005-present
IEEE All-Society Periodicals Package (ASPP) 1998-Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Aluminium Industry Abstracts
Biotechnology Research Abstracts
Ceramic Abstracts
Computer and Information Systems Abstracts
Corrosion Abstracts
Electronics & Communications Abstracts
Engineered Materials Abstracts
Materials Business File
Mechanical & Transportation Engineering Abstracts
Solid State and Superconductivity Abstracts
METADEX
Technology Research Database
ANTE: Abstracts in New Technology & Engineering
Engineering Research Database
Aerospace Database
Materials Research Database
ProQuest Computer Science Collection
ProQuest Health & Medical Complete (Alumni)
Civil Engineering Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Nursing & Allied Health Premium
Biotechnology and BioEngineering Abstracts
MEDLINE - Academic
DatabaseTitle CrossRef
PubMed
Materials Research Database
Civil Engineering Abstracts
Aluminium Industry Abstracts
Technology Research Database
Computer and Information Systems Abstracts – Academic
Mechanical & Transportation Engineering Abstracts
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
ProQuest Health & Medical Complete (Alumni)
Ceramic Abstracts
Materials Business File
METADEX
Biotechnology and BioEngineering Abstracts
Computer and Information Systems Abstracts Professional
Aerospace Database
Nursing & Allied Health Premium
Engineered Materials Abstracts
Biotechnology Research Abstracts
Solid State and Superconductivity Abstracts
Engineering Research Database
Corrosion Abstracts
Advanced Technologies Database with Aerospace
ANTE: Abstracts in New Technology & Engineering
MEDLINE - Academic
DatabaseTitleList MEDLINE - Academic
Materials Research Database

PubMed
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Xplore Digital Library
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Medicine
EISSN 2168-2208
EndPage 2036
ExternalDocumentID 38289847
10_1109_JBHI_2024_3360151
10416653
Genre orig-research
Journal Article
GrantInformation_xml – fundername: Shanxi Province Foundation for Youths
  grantid: 20210302124556
– fundername: Fundamental Research Program of Shanxi Province
  grantid: 202303021211023; 202303021221075
– fundername: National Natural Science Foundation of China
  grantid: 62071177
  funderid: 10.13039/501100001809
– fundername: Central guidance for Local scientific and technological development funds
  grantid: YDZJSX20231B001
– fundername: special fund for Science and Technology Innovation Teams of Shanxi Province
GroupedDBID 0R~
4.4
6IF
6IH
6IK
97E
AAJGR
AARMG
AASAJ
AAWTH
ABAZT
ABQJQ
ABVLG
ACIWK
ACPRK
AENEX
AFRAH
AGQYO
AGSQL
AHBIQ
AKJIK
AKQYR
ALMA_UNASSIGNED_HOLDINGS
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
EBS
EJD
HZ~
IFIPE
IPLJI
JAVBF
M43
O9-
OCL
PQQKQ
RIA
RIE
RNS
AAYXX
CITATION
RIG
NPM
7QF
7QO
7QQ
7SC
7SE
7SP
7SR
7TA
7TB
7U5
8BQ
8FD
F28
FR3
H8D
JG9
JQ2
K9.
KR7
L7M
L~C
L~D
NAPCQ
P64
7X8
IEDL.DBID RIE
ISSN 2168-2194
2168-2208
IngestDate Thu Jul 10 22:53:24 EDT 2025
Sun Jun 29 15:30:13 EDT 2025
Mon Jul 21 06:07:19 EDT 2025
Tue Jul 01 03:00:08 EDT 2025
Thu Apr 24 23:03:41 EDT 2025
Wed Aug 27 02:17:08 EDT 2025
IsPeerReviewed true
IsScholarly true
Issue 4
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
ORCID 0009-0006-6520-4129
0009-0008-7265-5985
0000-0001-9726-902X
0009-0003-3255-2416
0000-0001-7473-905X
0009-0003-5405-7432
PMID 38289847
PQID 3033619319
PQPubID 85417
PageCount 12
ParticipantIDs pubmed_primary_38289847
proquest_miscellaneous_2920571276
crossref_citationtrail_10_1109_JBHI_2024_3360151
crossref_primary_10_1109_JBHI_2024_3360151
ieee_primary_10416653
proquest_journals_3033619319
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate 2024-04-01
PublicationDateYYYYMMDD 2024-04-01
PublicationDate_xml – month: 04
  year: 2024
  text: 2024-04-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: Piscataway
PublicationTitle IEEE journal of biomedical and health informatics
PublicationTitleAbbrev JBHI
PublicationTitleAlternate IEEE J Biomed Health Inform
PublicationYear 2024
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
Abhang (ref6) 2016
ref12
ref15
ref14
Sorkhabi (ref25) 2014; 2
ref11
ref10
ref2
ref1
ref17
ref16
ref19
ref18
Hartmann (ref8) 2018
ref24
ref20
ref22
ref21
Tao (ref28) 2008
ref27
ref29
ref7
ref9
ref4
ref3
ref5
Mirza (ref23) 2014
Chang (ref26) 2016; 56
References_xml – ident: ref16
  doi: 10.1016/j.measurement.2019.107117
– ident: ref1
  doi: 10.1038/s41598-018-37359-z
– ident: ref10
  doi: 10.1109/tnsre.2021.3125023
– volume: 56
  start-page: 1131
  issue: 10
  year: 2016
  ident: ref26
  article-title: An ERP study of emotional sounds in different languages and nonverbal emotions
  publication-title: J. Tsinghua Univ. (Natural Sci. Ed.)
– ident: ref13
  doi: 10.1155/2021/2520394
– ident: ref15
  doi: 10.1109/msp.2007.914731
– ident: ref12
  doi: 10.1109/EMBC.2018.8512865
– ident: ref24
  doi: 10.5772/intechopen.94574
– ident: ref3
  doi: 10.1109/ICHMS53169.2021.9582457
– year: 2018
  ident: ref8
  article-title: EEG-GAN: Generative adversarial networks for electroencephalographic (EEG) brain signals
– ident: ref9
  doi: 10.1109/IJCNN48605.2020.9206942
– ident: ref17
  doi: 10.4103/0972-6748.57865
– ident: ref14
  doi: 10.1109/taffc.2022.3170369
– ident: ref20
  doi: 10.1109/tit.2006.871582
– ident: ref19
  doi: 10.1016/j.tics.2005.11.009
– volume: 2
  start-page: 66
  issue: 4
  year: 2014
  ident: ref25
  article-title: Emotion detection from EEG signals with continuous wavelet analyzing
  publication-title: Amer. J. Comput. Res. Repository
– volume-title: Proc. Blizzard Challenge Workshop
  year: 2008
  ident: ref28
  article-title: Design of speech corpus for mandarin text to speech
– ident: ref29
  doi: 10.1109/access.2018.2813358
– ident: ref18
  doi: 10.1109/taffc.2020.3025777
– ident: ref4
  doi: 10.1088/1741-2552/ace73f
– ident: ref2
  doi: 10.3389/fnhum.2020.557534
– ident: ref22
  doi: 10.1109/acssc.2009.5469828
– year: 2014
  ident: ref23
  article-title: Conditional generative adversarial nets
– ident: ref7
  doi: 10.5555/2969033.2969125
– ident: ref21
  doi: 10.1109/MSP.2007.914731
– ident: ref11
  doi: 10.1007/978-3-030-22796-8_16
– ident: ref27
  doi: 10.1016/j.ijpsycho.2013.06.025
– ident: ref5
  doi: 10.3389/fnins.2021.642251
– volume-title: Introduction to EEG- and Speech-Based Emotion Recognition
  year: 2016
  ident: ref6
SSID ssj0000816896
Score 2.4352133
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 2025
SubjectTerms Brain modeling
Compressed sensing
Convolutional neural networks
cross-subject
EEG
EEG emotion recognition
EEG generation
Electroencephalography
Emotion recognition
Emotions
Event-related potentials
Feature extraction
Generative adversarial networks
Semantics
Speech
Speech recognition
Title Generative Listener EEG for Speech Emotion Recognition Using Generative Adversarial Networks With Compressed Sensing
URI https://ieeexplore.ieee.org/document/10416653
https://www.ncbi.nlm.nih.gov/pubmed/38289847
https://www.proquest.com/docview/3033619319
https://www.proquest.com/docview/2920571276
Volume 28
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
linkProvider IEEE