Mapping Distributional Semantics to Property Norms with Deep Neural Networks

Bibliographic Details
Published in: Big Data and Cognitive Computing, Vol. 3, No. 2, p. 30
Main Authors: Li, Dandan; Summers-Stay, Douglas
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.06.2019
Summary: Word embeddings have been very successful in many natural language processing tasks, but they characterize the meaning of a word/concept by uninterpretable “context signatures”. Such a representation can render results obtained using embeddings difficult to interpret. Neighboring word vectors may have similar meanings, but in what way are they similar? That similarity may represent a synonymy, metonymy, or even antonymy relation. In the cognitive psychology literature, in contrast, concepts are frequently represented by their relations with properties. These properties are produced by test subjects when asked to describe important features of concepts. As such, they form a natural, intuitive feature space. In this work, we present a neural-network-based method for mapping a distributional semantic space onto a human-built property space automatically. We evaluate our method on word embeddings learned with different types of contexts, and report state-of-the-art performance on the widely used McRae semantic feature production norms.
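The kind of mapping the abstract describes can be sketched as a small feed-forward network that takes a word-embedding vector as input and predicts a score for each property-norm feature. The sketch below is an illustrative assumption, not the paper's actual architecture or data: the dimensions, the toy random data, and the single hidden layer are all placeholders standing in for real embeddings and McRae-style binary property vectors (e.g. "banana" → {is_yellow: 1, has_wheels: 0, …}).

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's): a one-hidden-layer
# MLP mapping embedding vectors to multi-label property-norm probabilities,
# trained with binary cross-entropy and plain gradient descent.

rng = np.random.default_rng(0)

EMB_DIM, HIDDEN, N_PROPS = 50, 32, 10   # all sizes are assumptions

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-ins for (embedding, binary property vector) training pairs.
X = rng.normal(size=(200, EMB_DIM))
Y = (rng.random(size=(200, N_PROPS)) < 0.3).astype(float)

W1 = rng.normal(scale=0.1, size=(EMB_DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_PROPS)); b2 = np.zeros(N_PROPS)

losses, lr = [], 0.5
for _ in range(200):
    H = np.tanh(X @ W1 + b1)             # hidden representation
    P = sigmoid(H @ W2 + b2)             # per-property probabilities
    # Mean binary cross-entropy over all concept/property pairs.
    losses.append(-np.mean(Y * np.log(P + 1e-9)
                           + (1 - Y) * np.log(1 - P + 1e-9)))
    dZ2 = (P - Y) / len(X)               # gradient of BCE w.r.t. output logits
    dH = (dZ2 @ W2.T) * (1 - H ** 2)     # backprop through tanh
    W2 -= lr * H.T @ dZ2; b2 -= lr * dZ2.sum(0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

After training, each output unit can be read as the model's confidence that the input concept has the corresponding property, which is what makes this feature space more interpretable than raw context signatures.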
ISSN: 2504-2289
DOI: 10.3390/bdcc3020030