Investigating the neural correlates of continuous speech computation with frequency-tagged neuroelectric responses

Bibliographic Details
Published in: NeuroImage (Orlando, Fla.), Vol. 44, No. 2, pp. 509-519
Main Authors: Buiatti, Marco; Peña, Marcela; Dehaene-Lambertz, Ghislaine
Format: Journal Article
Language: English
Published: United States: Elsevier Inc., 15.01.2009

Summary: In order to learn an oral language, humans have to discover words from a continuous signal. Streams of artificial monotonous speech can be readily segmented based on a statistical analysis of the syllables' distribution. This parsing is considerably improved when acoustic cues, such as subliminal pauses, are added, suggesting that a different mechanism is involved. Here we used a frequency-tagging approach to explore the neural mechanisms underlying word learning while listening to continuous speech. High-density EEG was recorded in adults listening to a concatenation of either random syllables or tri-syllabic artificial words, with or without subliminal pauses added every three syllables. Peaks in the EEG power spectrum at the occurrence frequencies of one and three syllables were used to tag the perception of a monosyllabic or tri-syllabic structure, respectively. Word streams elicited the suppression of the one-syllable frequency peak that was steadily present during random streams, suggesting that syllables are no longer perceived as isolated segments but are bound to adjacent syllables. Crucially, three-syllable frequency peaks were only observed during word streams with pauses, and were positively correlated with the explicit recall of the detected words. This result shows that pauses facilitate a fast, explicit and successful extraction of words from continuous speech, and that the frequency-tagging approach is a powerful tool for tracking brain responses to different hierarchical units of the speech structure.
ISSN: 1053-8119, 1095-9572
DOI: 10.1016/j.neuroimage.2008.09.015
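The summary describes a frequency-tagging analysis: spectral peaks at the syllable rate and at the tri-syllabic word rate are read out from the EEG power spectrum as signatures of how the stream is parsed. The Python sketch below is not the authors' analysis code; it only illustrates the core read-out, namely extracting power at the two tagged frequencies from a single EEG channel. The sampling rate (250 Hz) and syllable rate (4 Hz, hence a word rate of 4/3 Hz) are assumed values chosen for illustration, not figures taken from the paper.

# Minimal frequency-tagging sketch (assumed parameters, not the published pipeline).
import numpy as np
from scipy.signal import welch

FS = 250.0                        # sampling rate in Hz (assumed)
SYLLABLE_RATE = 4.0               # assumed syllable presentation rate (Hz)
WORD_RATE = SYLLABLE_RATE / 3.0   # tri-syllabic words -> one word every 3 syllables


def peak_power(eeg: np.ndarray, target_hz: float, fs: float = FS) -> float:
    """Return spectral power at the frequency bin closest to target_hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 30))  # ~0.033 Hz resolution
    return psd[np.argmin(np.abs(freqs - target_hz))]


if __name__ == "__main__":
    # Synthetic 5-minute "recording": noise plus a small oscillation at the word rate,
    # mimicking a tagged response during a word stream with pauses.
    t = np.arange(0, 300, 1 / FS)
    rng = np.random.default_rng(0)
    eeg = rng.normal(size=t.size) + 0.3 * np.sin(2 * np.pi * WORD_RATE * t)

    print("power at syllable rate:", peak_power(eeg, SYLLABLE_RATE))
    print("power at word rate:    ", peak_power(eeg, WORD_RATE))

In practice the power at each tagged frequency would be compared against neighbouring frequency bins or against the random-syllable control condition across channels and subjects; the single-bin read-out above is only the minimal ingredient of that comparison.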