Neural decoding of attentional selection in multi-speaker environments without access to clean sources



Bibliographic Details
Published in Journal of Neural Engineering Vol. 14; no. 5; p. 056001
Main Authors O'Sullivan, James; Chen, Zhuo; Herrero, Jose; McKhann, Guy M; Sheth, Sameer A; Mehta, Ashesh D; Mesgarani, Nima
Format Journal Article
Language English
Published England: IOP Publishing, 01.10.2017

Abstract Objective. People who suffer from hearing impairments can find it difficult to follow a conversation in a multi-speaker environment. Current hearing aids can suppress background noise; however, there is little that can be done to help a user attend to a single conversation amongst many without knowing which speaker the user is attending to. Cognitively controlled hearing aids that use auditory attention decoding (AAD) methods are the next step in offering help. Translating the successes in AAD research to real-world applications poses a number of challenges, including the lack of access to the clean sound sources in the environment with which to compare with the neural signals. We propose a novel framework that combines single-channel speech separation algorithms with AAD. Approach. We present an end-to-end system that (1) receives a single audio channel containing a mixture of speakers that is heard by a listener along with the listener's neural signals, (2) automatically separates the individual speakers in the mixture, (3) determines the attended speaker, and (4) amplifies the attended speaker's voice to assist the listener. Main results. Using invasive electrophysiology recordings, we identified the regions of the auditory cortex that contribute to AAD. Given appropriate electrode locations, our system is able to decode the attention of subjects and amplify the attended speaker using only the mixed audio. Our quality assessment of the modified audio demonstrates a significant improvement in both subjective and objective speech quality measures. Significance. Our novel framework for AAD bridges the gap between the most recent advancements in speech processing technologies and speech prosthesis research and moves us closer to the development of cognitively controlled hearable devices for the hearing impaired.
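The four-stage pipeline described in the abstract can be made concrete with a short sketch. The Python below is illustrative only and is not the authors' implementation: the paper uses deep-network single-channel speech separation and invasive cortical recordings, which are replaced here by placeholder signals and a stand-in pre-trained linear decoder, and every function and variable name (envelope, separate_speakers, decode_attention, amplify_attended) is hypothetical. It sketches the standard stimulus-reconstruction approach to AAD named in the subject terms: reconstruct the attended speech envelope from the neural channels, correlate it with each separated source's envelope, and boost the best-matching source.

# Minimal sketch of the four-stage pipeline from the abstract. Synthetic
# placeholders stand in for the parts the paper implements with real
# machinery: deep-network single-channel speech separation and invasive
# neural recordings. All names here are hypothetical, not the authors' code.
import numpy as np

def envelope(audio, sr, env_sr=100):
    """Crude amplitude envelope: rectify, then average into env_sr-Hz frames."""
    frame = sr // env_sr
    n = len(audio) // frame
    return np.abs(audio[:n * frame]).reshape(n, frame).mean(axis=1)

def separate_speakers(mixture, sr):
    """Stage 2: single-channel source separation. Stubbed with two tones;
    the paper uses a deep neural network here."""
    t = np.arange(len(mixture)) / sr
    return [np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 7 * t)]

def decode_attention(neural, sources, sr, decoder):
    """Stage 3: stimulus reconstruction. A pre-trained linear decoder maps
    neural channels (channels x frames) to an estimate of the attended
    speech envelope; the separated source whose envelope correlates best
    with that estimate is taken to be the attended speaker."""
    recon = decoder @ neural                      # (frames,)
    scores = []
    for src in sources:
        env = envelope(src, sr)
        m = min(len(env), len(recon))
        scores.append(np.corrcoef(recon[:m], env[:m])[0, 1])
    return int(np.argmax(scores)), scores

def amplify_attended(sources, attended, gain_db=12.0):
    """Stage 4: boost the attended source and remix."""
    gain = 10 ** (gain_db / 20)
    return sum(gain * s if i == attended else s for i, s in enumerate(sources))

if __name__ == "__main__":
    sr = 8000
    rng = np.random.default_rng(0)
    mixture = rng.standard_normal(2 * sr)          # stage 1: mixed audio in
    sources = separate_speakers(mixture, sr)       # stage 2: separate
    neural = rng.standard_normal((16, 200))        # stand-in 16-channel recording
    decoder = rng.standard_normal(16)              # stand-in trained decoder weights
    who, scores = decode_attention(neural, sources, sr, decoder)
    output = amplify_attended(sources, who)        # stage 4: amplified remix out
    print(f"attended speaker index: {who}; envelope correlations: {scores}")

In practice the decoder weights would be trained on segments where the attended speaker is known, and the correlation would be computed over a sliding window so the gain can follow attention switches; the record's subject terms (DNN, LSTM, stimulus-reconstruction) indicate that the separation and decoding stages are learned models rather than the toy stand-ins above.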
Author Affiliations
1. Department of Electrical Engineering, Columbia University, New York, NY, United States of America
2. Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States of America
3. Department of Neurological Surgery, The Neurological Institute, 710 West 168 Street, New York, NY 10032, United States of America
4. Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, NY 11030, United States of America
Author Details
1. James O'Sullivan (ORCID 0000-0002-3501-9647), jo2472@columbia.edu, Columbia University Mortimer B Zuckerman Mind Brain Behavior Institute, New York, NY, United States of America
2. Zhuo Chen, zc2204@columbia.edu, Columbia University Department of Electrical Engineering, New York, NY, United States of America
3. Jose Herrero, jherreroru@northwell.edu, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research Department of Neurosurgery, Manhasset, NY 11030, United States of America
4. Guy M McKhann, gm317@cumc.columbia.edu, The Neurological Institute Department of Neurological Surgery, 710 West 168 Street, New York, NY 10032, United States of America
5. Sameer A Sheth, ss4451@cumc.columbia.edu, The Neurological Institute Department of Neurological Surgery, 710 West 168 Street, New York, NY 10032, United States of America
6. Ashesh D Mehta, amehta@nshs.edu, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research Department of Neurosurgery, Manhasset, NY 11030, United States of America
7. Nima Mesgarani, nima@ee.columbia.edu, Columbia University Mortimer B Zuckerman Mind Brain Behavior Institute, New York, NY, United States of America
CODEN JNEIEZ
Copyright 2017 IOP Publishing Ltd
DOI 10.1088/1741-2552/aa7ab4
Discipline Anatomy & Physiology
EISSN 1741-2552
EndPage 056001
ExternalDocumentID PMC5805380
28776506
Genre Journal Article
Research Support, N.I.H., Extramural
GrantInformation National Institute on Deafness and Other Communication Disorders, grant DC014279 (https://doi.org/10.13039/100000055); NIMH NIH HHS, grant R21 MH114166; NIDCD NIH HHS, grant R01 DC014279
ISSN 1741-2560 (print); 1741-2552 (online)
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 5
Language English
Notes JNE-101802.R1
ORCID James O'Sullivan: 0000-0002-3501-9647 (https://orcid.org/0000-0002-3501-9647)
OpenAccessLink http://doi.org/10.1088/1741-2552/aa7ab4
PMID 28776506
PageCount 14
PublicationDate 2017-10-01
PublicationPlace England
PublicationTitle Journal of Neural Engineering
PublicationTitleAbbrev JNE
PublicationTitleAlternate J. Neural Eng.
PublicationYear 2017
Publisher IOP Publishing
StartPage 56001
SubjectTerms Acoustic Stimulation - methods
attention
Auditory Cortex - physiology
Auditory Perception - physiology
DNN
ECoG
Electrodes, Implanted - trends
Electroencephalography - methods
Female
hearing aid
Hearing Aids - trends
Humans
LSTM
Male
Nerve Net - physiology
sEEG
Speech Perception - physiology
stimulus-reconstruction
Title Neural decoding of attentional selection in multi-speaker environments without access to clean sources
URI https://iopscience.iop.org/article/10.1088/1741-2552/aa7ab4
https://www.ncbi.nlm.nih.gov/pubmed/28776506
https://www.proquest.com/docview/1926684946
https://pubmed.ncbi.nlm.nih.gov/PMC5805380
Volume 14