A Principled Approach to Feature Selection in Models of Sentence Processing

Bibliographic Details
Published in: Cognitive Science, Vol. 44, No. 12, e12918
Main Authors: Smith, Garrett; Vasishth, Shravan
Format: Journal Article
Language: English
Published: United States: Wiley, 01.12.2020 (Wiley Subscription Services, Inc.)

Summary: Among theories of human language comprehension, cue‐based memory retrieval has proven to be a useful framework for understanding when and how processing difficulty arises in the resolution of long‐distance dependencies. Most previous work in this area has assumed that very general retrieval cues like [+subject] or [+singular] do the work of identifying (and sometimes misidentifying) a retrieval target in order to establish a dependency between words. However, recent work suggests that general, handpicked retrieval cues like these may not be enough to explain illusions of plausibility (Cunnings & Sturt, 2018), which can arise in sentences like The letter next to the porcelain plate shattered. Capturing such retrieval interference effects requires lexically specific features and retrieval cues, but handpicking the features is hard to do in a principled way and greatly increases modeler degrees of freedom. To remedy this, we use well‐established word embedding methods for creating distributed lexical feature representations that encode information relevant for retrieval using distributed retrieval cue vectors. We show that the similarity between the feature and cue vectors (a measure of plausibility) predicts total reading times in Cunnings and Sturt’s eye‐tracking data. The features can easily be plugged into existing parsing models (including cue‐based retrieval and self‐organized parsing), putting very different models on more equal footing and facilitating future quantitative comparisons.
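The abstract's central quantitative move, scoring retrieval plausibility as the similarity between a distributed retrieval cue vector and a lexical feature vector, can be sketched with cosine similarity over any off-the-shelf embeddings. The snippet below is a minimal illustration only: the toy four-dimensional vectors and variable names are hypothetical stand-ins for real word embeddings, not the authors' actual representations or model.

```python
import math

def cosine_similarity(cue, feature):
    """Cosine similarity between a retrieval cue vector and a lexical feature vector."""
    dot = sum(c * f for c, f in zip(cue, feature))
    norm_cue = math.sqrt(sum(c * c for c in cue))
    norm_feat = math.sqrt(sum(f * f for f in feature))
    return dot / (norm_cue * norm_feat)

# Hypothetical toy vectors (illustration only, not real embeddings).
cue_shattered = [0.9, 0.1, 0.3, 0.2]  # cue projected by the verb "shattered"
feat_plate    = [0.8, 0.2, 0.4, 0.1]  # features of "plate" (plausible subject)
feat_letter   = [0.1, 0.9, 0.2, 0.7]  # features of "letter" (implausible subject)

# A higher similarity marks "plate" as the more plausible retrieval target,
# mirroring the illusion of plausibility in "The letter next to the
# porcelain plate shattered."
print(cosine_similarity(cue_shattered, feat_plate) >
      cosine_similarity(cue_shattered, feat_letter))  # True
```

On this toy example, the cue–"plate" similarity exceeds the cue–"letter" similarity, which is the kind of graded, lexically specific score the paper proposes feeding into cue-based retrieval and self-organized parsing models.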
ISSN: 0364-0213
eISSN: 1551-6709
DOI: 10.1111/cogs.12918