Speech prediction of a listener via EEG-based classification through subject-independent phase dissimilarity model
Published in | Scientific Reports, Vol. 15, No. 1, Article 26174 (16 pp.) |
---|---|
Main Authors | , , |
Format | Journal Article |
Language | English |
Published | London: Nature Publishing Group UK (Nature Portfolio), 18.07.2025 |
Summary: This study examines the consistency of cross-subject electroencephalography (EEG) phase tracking in response to auditory stimuli via speech classification. Repeated listening to audio induces consistent EEG phase alignments across trials for listeners. If the EEG phase aligns closely with the acoustics, cross-subject EEG phase tracking should also exhibit significant similarity. To test this hypothesis, we propose a generalized subject-independent phase dissimilarity model, which eliminates the requirement for training on individuals. The proposed model assesses the duration and number of cross-subject EEG phase alignments, both of which influence accuracy. EEG responses were recorded from seventeen participants who each listened three times to 22 unfamiliar one-minute passages from audiobooks. Our findings demonstrate that the EEG phase is consistent across repeated cross-subject trials. The model achieved an EEG-based classification accuracy of 74.96%. Furthermore, an average of nine distinct phasic templates from different participants is sufficient to train the model effectively, regardless of the duration of EEG phase alignments. Additionally, the duration of EEG phase alignments positively correlates with classification accuracy. These results indicate that predicting a listener's speech is feasible by training the model with phasic templates from other listeners, owing to the consistent cross-subject EEG phase alignments with speech acoustics.
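The abstract does not spell out how phase dissimilarity is computed, so the sketch below only illustrates the general idea in Python: extract instantaneous EEG phase (here via a Hilbert transform on a band-passed signal, an assumed choice), score a test trial against cross-subject phasic templates with a circular distance, and assign the passage whose template is least dissimilar. The function names, the 1–8 Hz band, and the specific metric are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of cross-subject phase-dissimilarity classification.
# Assumptions (not from the paper): Hilbert phase of a 1-8 Hz band-passed
# signal, and mean circular distance as the dissimilarity score.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert


def instantaneous_phase(eeg, fs, band=(1.0, 8.0)):
    """Band-pass a single-channel EEG trace and return its Hilbert phase."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    return np.angle(hilbert(filtfilt(b, a, eeg)))


def phase_dissimilarity(phase_a, phase_b):
    """Mean circular distance between two phase series (0 = perfectly aligned)."""
    return float(np.mean(1.0 - np.cos(phase_a - phase_b)))


def classify_passage(test_eeg, fs, templates):
    """Label the test trial with the passage whose cross-subject phasic
    template (phase series of equal length, built from other listeners)
    is least dissimilar to the test trial's phase."""
    test_phase = instantaneous_phase(test_eeg, fs)
    return min(templates,
               key=lambda label: phase_dissimilarity(test_phase, templates[label]))
```

In this reading, a "phasic template" would simply be a phase time series derived from other listeners' trials of the same passage, so classification never requires training data from the test listener, which is one plausible interpretation of the subject-independent design described above.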
ISSN: 2045-2322
DOI: 10.1038/s41598-025-12135-y