Employing deep learning model to evaluate speech information in acoustic simulations of Cochlear implants

Bibliographic Details
Published in Scientific Reports, Vol. 14, no. 1, article no. 24056 (17 pages)
Main Authors Sinha, Rahul; Azadpour, Mahan
Format Journal Article
Language English
Published London: Nature Publishing Group UK, 14.10.2024
Nature Publishing Group
Nature Portfolio
Subjects
Online Access Get full text
ISSN 2045-2322
EISSN 2045-2322
DOI 10.1038/s41598-024-73173-6

Abstract Acoustic vocoders play a key role in simulating the speech information available to cochlear implant (CI) users. Traditionally, the intelligibility of vocoder CI simulations is assessed through speech recognition experiments with normally-hearing subjects, a process that can be time-consuming, costly, and subject to individual variability. As an alternative approach, we utilized an advanced deep learning speech recognition model to investigate the intelligibility of CI simulations. We evaluated the model’s performance on vocoder-processed words and sentences with varying vocoder parameters. The number of vocoder bands, frequency range, and envelope dynamic range were adjusted to simulate sound processing settings in CI devices. Additionally, we manipulated the low-cutoff frequency and intensity quantization of vocoder envelopes to simulate psychophysical temporal and intensity resolutions in CI patients. The results were evaluated within the context of the audio analysis performed in the model. Interestingly, the deep learning model, despite not being originally designed to mimic human speech processing, exhibited a human-like response to alterations in vocoder parameters, resembling existing human subject results. This approach offers significant time and cost savings compared to testing human subjects, and eliminates learning and fatigue effects during testing. Our findings demonstrate the potential of speech recognition models in facilitating auditory research.
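The processing chain summarized in the abstract (band-pass analysis, low-pass envelope extraction, noise-carrier resynthesis, and scoring of the vocoded speech with a pretrained speech recognizer) can be sketched in a few lines of code. The sketch below is an illustrative assumption, not the authors' implementation: the package choices (numpy/scipy for the vocoder, openai-whisper as a stand-in for the "advanced deep learning speech recognition model", jiwer for word error rate), the function names noise_vocode and word_accuracy, and all parameter defaults are hypothetical.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
import whisper   # pip install openai-whisper  (assumed stand-in ASR model)
import jiwer     # pip install jiwer           (word-error-rate scoring)

def noise_vocode(x, fs, n_bands=8, f_lo=200.0, f_hi=7000.0, env_cutoff=160.0):
    """Noise-band vocoder: band-pass analysis -> low-pass envelopes ->
    envelopes modulate band-limited noise carriers. x is a mono float waveform
    at sampling rate fs (>= 16 kHz assumed); all parameter values are illustrative."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)                  # log-spaced band edges
    env_sos = butter(2, env_cutoff / (fs / 2), btype="lowpass", output="sos")
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo / (fs / 2), hi / (fs / 2)],
                          btype="bandpass", output="sos")
        band = sosfiltfilt(band_sos, x)                            # analysis band
        env = np.clip(sosfiltfilt(env_sos, np.abs(hilbert(band))), 0.0, None)
        carrier = sosfiltfilt(band_sos, np.random.randn(len(x)))   # noise carrier
        out += env * carrier                                       # resynthesized band
    return out / (np.max(np.abs(out)) + 1e-9)                      # normalize

def word_accuracy(vocoded_16k, reference_text, model_name="base.en"):
    """Transcribe 16-kHz mono float32 audio with a pretrained Whisper model and
    return 1 - word error rate against the reference transcript.
    (In a real parameter sweep, load the model once and reuse it.)"""
    model = whisper.load_model(model_name)
    hypothesis = model.transcribe(vocoded_16k.astype(np.float32), fp16=False)["text"]
    return 1.0 - jiwer.wer(reference_text, hypothesis)

Sweeping n_bands, the band edges, env_cutoff, or an added envelope-quantization step while recording word accuracy would trace out the kind of parametric intelligibility curves the abstract describes.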
ArticleNumber 24056
Author Azadpour, Mahan
Sinha, Rahul
Author_xml – sequence: 1
  givenname: Rahul
  surname: Sinha
  fullname: Sinha, Rahul
  organization: Department of Otolaryngology, New York University Grossman School of Medicine
– sequence: 2
  givenname: Mahan
  surname: Azadpour
  fullname: Azadpour, Mahan
  email: Mahan.Azadpour@nyulangone.org
  organization: Department of Otolaryngology, New York University Grossman School of Medicine
BackLink https://www.ncbi.nlm.nih.gov/pubmed/39402071 (View this record in MEDLINE/PubMed)
ContentType Journal Article
Copyright The Author(s) 2024
2024. The Author(s).
The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.1038/s41598-024-73173-6
DatabaseName Springer Nature OA Free Journals
CrossRef
Medline
MEDLINE
MEDLINE (Ovid)
MEDLINE
MEDLINE
PubMed
ProQuest Central (Corporate)
Health & Medical Collection
ProQuest Central (purchase pre-March 2016)
Biology Database (Alumni Edition)
Medical Database (Alumni Edition)
Science Database (Alumni Edition)
ProQuest SciTech Collection
ProQuest Natural Science Collection
ProQuest Hospital Collection
Hospital Premium Collection (Alumni Edition)
ProQuest Central (Alumni) (purchase pre-March 2016)
ProQuest Central (Alumni)
ProQuest One Sustainability (subscription)
ProQuest Central UK/Ireland
ProQuest Central Essentials
ProQuest : Biological Science Collection journals [unlimited simultaneous users]
ProQuest Central
Natural Science Collection
ProQuest One Community College
ProQuest Central
Health Research Premium Collection
Health Research Premium Collection (Alumni)
ProQuest Central Student
SciTech Premium Collection
ProQuest Health & Medical Complete (Alumni)
Biological Sciences
ProQuest Health & Medical Collection
Medical Database
Science Database
ProQuest Biological Science
ProQuest Central Premium
ProQuest One Academic
Publicly Available Content Database
ProQuest Health & Medical Research Collection
ProQuest One Academic Middle East (New)
ProQuest One Health & Nursing
ProQuest One Academic Eastern Edition (DO NOT USE)
ProQuest One Applied & Life Sciences
ProQuest One Academic
ProQuest One Academic UKI Edition
ProQuest Central China
ProQuest Central Basic
MEDLINE - Academic
PubMed Central (Full Participant titles)
DOAJ : Directory of Open Access Journals [open access]
Database_xml – sequence: 1
  dbid: C6C
  name: Springer Nature OA Free Journals
  url: http://www.springeropen.com/
  sourceTypes: Publisher
– sequence: 2
  dbid: DOA
  name: Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 3
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 4
  dbid: EIF
  name: MEDLINE
  url: https://www.webofscience.com/wos/medline/basic-search
  sourceTypes: Index Database
– sequence: 5
  dbid: BENPR
  name: ProQuest Central
  url: https://www.proquest.com/central
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Biology
EISSN 2045-2322
EndPage 17
ExternalDocumentID oai_doaj_org_article_c7a9cdd145b5439ebbbb437bf2e2e750
PMC11479273
39402071
10_1038_s41598_024_73173_6
Genre Journal Article
GrantInformation_xml – fundername: NIDCD NIH HHS
  grantid: R21 DC020305
ISSN 2045-2322
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 1
Language English
License 2024. The Author(s).
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
OpenAccessLink http://journals.scholarsportal.info/openUrl.xqy?doi=10.1038/s41598-024-73173-6
PMID 39402071
PQID 3116458715
PQPubID 2041939
PageCount 17
ParticipantIDs doaj_primary_oai_doaj_org_article_c7a9cdd145b5439ebbbb437bf2e2e750
pubmedcentral_primary_oai_pubmedcentral_nih_gov_11479273
proquest_miscellaneous_3116677437
proquest_journals_3116458715
pubmed_primary_39402071
crossref_primary_10_1038_s41598_024_73173_6
springer_journals_10_1038_s41598_024_73173_6
PublicationCentury 2000
PublicationDate 2024-10-14
PublicationDateYYYYMMDD 2024-10-14
PublicationDate_xml – month: 10
  year: 2024
  text: 2024-10-14
  day: 14
PublicationDecade 2020
PublicationPlace London
PublicationPlace_xml – name: London
– name: England
PublicationTitle Scientific reports
PublicationTitleAbbrev Sci Rep
PublicationTitleAlternate Sci Rep
PublicationYear 2024
Publisher Nature Publishing Group UK
Nature Publishing Group
Nature Portfolio
SourceID doaj
pubmedcentral
proquest
pubmed
crossref
springer
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Publisher
StartPage 24056
SubjectTerms 631/114/1305
631/378
692/308/575
Acoustics
Adult
Auditory discrimination learning
Cochlea
Cochlear Implants
Deep Learning
Female
Humanities and Social Sciences
Humans
Information processing
Male
multidisciplinary
Psychophysics
Science
Science (multidisciplinary)
Simulation
Speech
Speech Intelligibility - physiology
Speech Perception - physiology
Speech recognition
Transplants & implants
Voice recognition
Title Employing deep learning model to evaluate speech information in acoustic simulations of Cochlear implants
URI https://link.springer.com/article/10.1038/s41598-024-73173-6
https://www.ncbi.nlm.nih.gov/pubmed/39402071
https://www.proquest.com/docview/3116458715
https://www.proquest.com/docview/3116677437
https://pubmed.ncbi.nlm.nih.gov/PMC11479273
https://doaj.org/article/c7a9cdd145b5439ebbbb437bf2e2e750
Volume 14