Measuring and Improving Consistency in Pretrained Language Models

Consistency of a model—that is, the invariance of its behavior under meaning-preserving alternations in its input—is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, …


Bibliographic Details
Published in: Transactions of the Association for Computational Linguistics, Vol. 9, pp. 1012–1031
Main Authors: Elazar, Yanai; Kassner, Nora; Ravfogel, Shauli; Ravichander, Abhilasha; Hovy, Eduard; Schütze, Hinrich; Goldberg, Yoav
Format: Journal Article
Language: English
Published: MIT Press, One Rogers Street, Cambridge, MA 02142-1209, USA, 01.01.2021
