Exploring Category Structure with Contextual Language Models and Lexical Semantic Networks

Bibliographic Details
Published in: arXiv.org
Main Authors: Renner, Joseph; Denis, Pascal; Gilleron, Rémi; Brunellière, Angèle
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 14.02.2023

Summary: Recent work on predicting category structure with distributional models, using either static word embeddings (Heyman and Heyman, 2019) or contextualized language models (CLMs) (Misra et al., 2021), reports low correlations with human ratings, thus calling into question their plausibility as models of human semantic memory. In this work, we revisit this question, testing a wider array of methods for probing CLMs for typicality prediction. Our experiments, using BERT (Devlin et al., 2018), show the importance of using the right type of CLM probes, as our best BERT-based typicality prediction methods substantially improve over previous work. Second, our results highlight the importance of polysemy in this task: our best results are obtained when using a disambiguation mechanism. Finally, additional experiments reveal that Information Content-based WordNet (Miller, 1995) similarity measures, also endowed with disambiguation, match the performance of the best BERT-based method, and in fact capture complementary information, which can be combined with BERT to achieve enhanced typicality predictions.
ISSN: 2331-8422
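
As a rough illustration of the two families of typicality predictors the summary describes, the sketch below scores one category/exemplar pair with a BERT-based similarity and with an Information Content-based WordNet (Resnik) similarity. This is not the paper's code: the carrier sentences, the cosine-similarity probe, the choice of Resnik's measure, and the hand-picked synsets are all assumptions for illustration, and the paper's actual probes and disambiguation mechanism may differ. It assumes `torch`, `transformers`, and `nltk` are installed.

```python
# Minimal sketch, not the paper's method: one BERT-based and one WordNet
# IC-based typicality proxy for a single category/exemplar pair.
import torch
import nltk
from nltk.corpus import wordnet as wn, wordnet_ic
from transformers import AutoModel, AutoTokenizer

nltk.download("wordnet", quiet=True)
nltk.download("wordnet_ic", quiet=True)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def bert_vector(word: str, context: str) -> torch.Tensor:
    """Contextual embedding of `word`'s first subword inside `context`."""
    enc = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    first_id = tokenizer(word, add_special_tokens=False)["input_ids"][0]
    pos = enc["input_ids"][0].tolist().index(first_id)
    return hidden[pos]

# CLM-based proxy: cosine similarity of contextual embeddings.
# The carrier sentence "A X is an animal." is an illustrative assumption.
category, exemplar = "bird", "robin"
v_cat = bert_vector(category, f"A {category} is an animal.")
v_ex = bert_vector(exemplar, f"A {exemplar} is an animal.")
bert_score = torch.cosine_similarity(v_cat, v_ex, dim=0).item()

# WordNet proxy: Information Content-based (Resnik) similarity with Brown IC.
# Synsets are picked by hand here; the paper disambiguates automatically.
brown_ic = wordnet_ic.ic("ic-brown.dat")
wn_score = wn.synset("bird.n.01").res_similarity(
    wn.synset("robin.n.01"), brown_ic
)

print(f"BERT cosine: {bert_score:.3f}   WordNet Resnik IC: {wn_score:.3f}")
```

In the paper, predictors of this kind are evaluated by correlating their scores with human typicality ratings; the summary's final point is that the BERT-based and WordNet-based signals capture complementary information and can be combined for better predictions.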