Different mechanisms for role relations versus verb–action congruence effects: Evidence from ERPs in picture–sentence verification


Bibliographic Details
Published in: Acta Psychologica, Vol. 152, pp. 133–148
Main Authors: Knoeferle, Pia; Urbach, Thomas P.; Kutas, Marta
Format: Journal Article
Language: English
Published: Kidlington: Elsevier B.V., 01.10.2014
ISSN: 0001-6918, 1873-6297
DOI: 10.1016/j.actpsy.2014.08.004

Summary: Extant accounts of visually situated language processing do make general predictions about visual context effects on incremental sentence comprehension; these, however, are not sufficiently detailed to accommodate potentially different visual context effects (such as a scene–sentence mismatch based on actions versus thematic role relations; e.g., Altmann & Kamide, 2007; Knoeferle & Crocker, 2007; Taylor & Zwaan, 2008; Zwaan & Radvansky, 1998). To provide additional data for theory testing and development, we collected event-related brain potentials (ERPs) as participants read a subject–verb–object sentence (500 ms SOA in Experiment 1; 300 ms SOA in Experiment 2), together with post-sentence verification times indicating whether or not the verb and/or the thematic role relations matched a preceding picture (depicting two participants engaged in an action). Though both were processed incrementally, these two types of mismatch yielded different ERP effects. Role-relation mismatch effects emerged at the subject noun as anterior negativities to the mismatching noun, preceding action mismatch effects, which manifested as centro-parietal N400s greater to the mismatching verb, regardless of SOA. The two mismatch manipulations also yielded different effects post-verbally, correlated differently with participants' mean accuracy, verbal working memory, and visual-spatial scores, and differed in their interactions with SOA. Taken together, these results clearly implicate more than a single mismatch mechanism that extant accounts of picture–sentence processing must accommodate.

Highlights:
• Extant accounts make general claims about visual context effects on comprehension.
• These accounts cannot accommodate potentially different visual context effects.
• We report different ERP effects for role-relation and verb-action mismatches.
• This suggests more than a single mismatch mechanism for picture–sentence processing.
• We outline constraints for tenable accounts and provide an example instantiation.