Cross-modal interactions at the audiovisual cocktail-party revealed by behavior, ERPs, and neural oscillations


Bibliographic Details
Published in: bioRxiv
Main Authors: Klatt, Laura-Isabelle; Begau, Alexandra; Schneider, Daniel; Wascher, Edmund; Getzmann, Stephan
Format: Paper
Language: English
Published: Cold Spring Harbor: Cold Spring Harbor Laboratory Press, 01.10.2022
Edition: 1.1
ISSN: 2692-8205
DOI: 10.1101/2022.09.30.510236


More Information
Summary: Theories of attention argue that objects are the units of attentional selection. In real-world environments such objects can contain visual and auditory features. To understand how mechanisms of selective attention operate in multisensory environments, we created an audiovisual cocktail-party situation, in which two speakers (left and right of fixation) simultaneously articulated brief numerals. In three separate blocks, informative auditory speech was presented (a) alone or paired with (b) congruent or (c) uninformative visual speech. In all blocks, subjects localized a pre-defined numeral. While audiovisual-congruent and uninformative speech improved response times and speed of information uptake according to diffusion modeling, an ERP analysis revealed that this did not coincide with enhanced attentional engagement. Yet, consistent with object-based attentional selection, the deployment of auditory spatial attention (N2ac) was accompanied by visuo-spatial attentional orienting (N2pc) irrespective of the informational content of visual speech. Notably, an N2pc component was absent in the auditory-only condition, demonstrating that a sound-induced shift of visuo-spatial attention relies on the availability of audio-visual features evolving coherently in time. Additional analyses revealed cross-modal interactions in working memory and modulations of cognitive control. The preregistered methods and hypotheses of this study can be found at https://osf.io/vh38g. Competing Interest Statement: The authors have declared no competing interest.
Bibliography: SourceType: Working Papers; ObjectType: Working Paper/Pre-Print