Measuring and Improving Consistency in Pretrained Language Models
Consistency of a model—that is, the invariance of its behavior under meaning-preserving alternations in its input—is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, w...
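To make the setup the abstract describes concrete, here is a minimal sketch (not the paper's own code or data) of a consistency probe: it queries a masked language model with two meaning-preserving paraphrases of the same factual statement and checks whether the top prediction is invariant. The model name and example prompts are illustrative assumptions; the paper's actual resource of paraphrased cloze queries is much larger.

```python
# Illustrative consistency probe (not the authors' code): a PLM is consistent
# on a fact if paraphrases of the query yield the same top prediction.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")  # assumed model choice

paraphrases = [
    "The capital of France is [MASK].",
    "France's capital is [MASK].",
]

# Take the highest-scoring token for each paraphrase.
top_predictions = [fill(p)[0]["token_str"].strip() for p in paraphrases]

# Consistent on this fact iff both paraphrases produce the same answer.
print(top_predictions, "consistent:", len(set(top_predictions)) == 1)
```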
Published in | Transactions of the Association for Computational Linguistics, Vol. 9, pp. 1012-1031 |
---|---|
Main Authors | Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg |
Format | Journal Article |
Language | English |
Published | The MIT Press, One Rogers Street, Cambridge, MA 02142-1209, USA, 01.01.2021 |