Speech prediction of a listener via EEG-based classification through subject-independent phase dissimilarity model
Published in | Scientific Reports, Vol. 15, No. 1, Article 26174 (16 pages) |
---|---|
Main Authors | Malekmohammadi, Alireza; Rauschecker, Josef P.; Cheng, Gordon |
Format | Journal Article |
Language | English |
Published | London: Nature Publishing Group UK, 18.07.2025 |
Abstract | This study examines the consistency of cross-subject electroencephalography (EEG) phase tracking in response to auditory stimuli via speech classification. Repeated listening to audio induces consistent EEG phase alignments across trials for listeners. If the phase of EEG aligns more closely with acoustics, cross-subject EEG phase tracking should also exhibit significant similarity. To test this hypothesis, we propose a generalized subject-independent phase dissimilarity model, which eliminates the requirement for training on individuals. Our proposed model assesses the duration and number of cross-subject EEG phase alignments, which influence accuracy. EEG responses were recorded from seventeen participants who listened three times to 22 unfamiliar one-minute passages from audiobooks. Our findings demonstrate that the EEG phase is consistent within repeated cross-subject trials. Our model achieved an EEG-based classification accuracy of 74.96%. Furthermore, an average of nine distinct phasic templates from different participants is sufficient to effectively train the model, regardless of the duration of EEG phase alignments. Additionally, the duration of EEG phase alignments positively correlates with classification accuracy. These results indicate that predicting a listener's speech is feasible by training the model with phasic templates from other listeners, owing to the consistent cross-subject EEG phase alignments with speech acoustics. |
---|---|
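The abstract's classification scheme can be sketched in code. This is a hypothetical minimal illustration under stated assumptions, not the authors' implementation: it assumes a phase-locking-value-style dissimilarity (1 minus the mean resultant vector length of the phase difference), single-channel synthetic trials in place of real EEG, and invented function names (`instantaneous_phase`, `phase_dissimilarity`, `classify`).

```python
# Hypothetical sketch: classify which passage a test EEG trial belongs to by
# comparing its instantaneous phase against phasic templates from OTHER
# listeners, in the spirit of the subject-independent model described above.
import numpy as np
from scipy.signal import hilbert


def instantaneous_phase(x):
    """Instantaneous phase (radians) along the last axis via the Hilbert transform."""
    return np.angle(hilbert(x, axis=-1))


def phase_dissimilarity(phi_a, phi_b):
    """1 - |mean resultant vector| of the phase difference:
    ~0 when the two series stay phase-aligned, ~1 when they drift independently."""
    return 1.0 - np.abs(np.mean(np.exp(1j * (phi_a - phi_b))))


def classify(test_trial, templates):
    """Assign the trial to the passage whose cross-subject templates are least
    dissimilar. `templates` maps passage id -> (n_subjects, n_samples) phases."""
    phi = instantaneous_phase(test_trial)
    scores = {pid: np.mean([phase_dissimilarity(phi, t) for t in phis])
              for pid, phis in templates.items()}
    return min(scores, key=scores.get)
```

In this toy setup, nine synthetic "subjects" per passage stand in for the roughly nine phasic templates from other listeners that the study reports as sufficient for training; the test listener never contributes a template, mirroring the subject-independent design.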
ArticleNumber | 26174 |
Author | Rauschecker, Josef P. Cheng, Gordon Malekmohammadi, Alireza |
Author_xml | – sequence: 1 givenname: Alireza orcidid: 0000-0002-1735-1048 surname: Malekmohammadi fullname: Malekmohammadi, Alireza email: alireza.malekmohammadi@tum.de organization: Institute for Cognitive Systems, Electrical Engineering, Technical University of Munich – sequence: 2 givenname: Josef P. surname: Rauschecker fullname: Rauschecker, Josef P. organization: Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center – sequence: 3 givenname: Gordon surname: Cheng fullname: Cheng, Gordon organization: Institute for Cognitive Systems, Electrical Engineering, Technical University of Munich |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/40681689 (View this record in MEDLINE/PubMed) |
ContentType | Journal Article |
Copyright | The Author(s) 2025 2025. The Author(s). The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. The Author(s) 2025 2025 |
DOI | 10.1038/s41598-025-12135-y |
Discipline | Biology |
EISSN | 2045-2322 |
EndPage | 16 |
ExternalDocumentID | oai_doaj_org_article_9fec59766e7e4ba1932590dd883dbbc1 PMC12274610 40681689 10_1038_s41598_025_12135_y |
Genre | Journal Article |
GrantInformation_xml | – fundername: Technische Universität München (1025) |
ISSN | 2045-2322 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
Keywords | Speech Phase EEG Classification |
Language | English |
License | 2025. The Author(s). Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
ORCID | 0000-0002-1735-1048 |
OpenAccessLink | https://www.nature.com/articles/s41598-025-12135-y |
PMID | 40681689 |
PageCount | 16 |
PublicationDate | 2025-07-18 |
PublicationPlace | London |
PublicationTitle | Scientific reports |
PublicationTitleAbbrev | Sci Rep |
PublicationYear | 2025 |
Publisher | Nature Publishing Group UK Nature Publishing Group Nature Portfolio |
StartPage | 26174 |
SubjectTerms | 631/378/116 631/378/2619 631/378/3917 Accuracy Acoustic Stimulation Acoustics Adult Audiobooks Auditory stimuli Classification EEG Electrodes Electroencephalography Electroencephalography - methods Female Humanities and Social Sciences Humans Male multidisciplinary Phase Science Science (multidisciplinary) Speech Speech - physiology Speech Perception - physiology Standard scores Training Young Adult |
Title | Speech prediction of a listener via EEG-based classification through subject-independent phase dissimilarity model |
URI | https://link.springer.com/article/10.1038/s41598-025-12135-y https://www.ncbi.nlm.nih.gov/pubmed/40681689 https://www.proquest.com/docview/3231323719 https://www.proquest.com/docview/3231645383 https://pubmed.ncbi.nlm.nih.gov/PMC12274610 https://doaj.org/article/9fec59766e7e4ba1932590dd883dbbc1 |
Volume | 15 |