Deriving Language Models from Masked Language Models

Bibliographic Details
Main Authors: Torroba Hennigen, Lucas; Kim, Yoon
Format: Journal Article
Language: English
Published: 24.05.2023

More Information
Summary: Masked language models (MLMs) do not explicitly define a distribution over language; that is, they are not language models per se. However, recent work has implicitly treated them as such for the purposes of generation and scoring. This paper studies methods for deriving explicit joint distributions from MLMs, focusing on distributions over two tokens, which makes it possible to calculate exact distributional properties. We find that an approach based on identifying joints whose conditionals are closest to those of the MLM works well and outperforms existing Markov random field-based approaches. We further find that this derived model's conditionals can even occasionally outperform the original MLM's conditionals.
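
The summary contrasts two ways of turning an MLM's conditionals into an explicit two-token joint. The following minimal PyTorch sketch is illustrative only, not the authors' code: the conditional tables are random stand-ins for MLM outputs, the MRF construction shown is one common normalized-product variant, and the KL objective is one plausible reading of "conditionals closest to those of the MLM," not necessarily the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    V = 5  # toy vocabulary size
    torch.manual_seed(0)

    # Stand-in "MLM" conditionals over a two-token sequence (x1, x2).
    # p1[x1, x2] = p(x1 | x2)  -> each column sums to 1
    # p2[x1, x2] = p(x2 | x1)  -> each row sums to 1
    p1 = torch.softmax(torch.randn(V, V), dim=0)
    p2 = torch.softmax(torch.randn(V, V), dim=1)

    # (a) MRF-style joint: normalize the product of the two conditionals.
    mrf = p1 * p2
    mrf = mrf / mrf.sum()

    # (b) "Closest conditionals" joint: fit q(x1, x2) so that its own
    # conditionals are close (here, in KL) to the MLM's conditionals.
    logits = torch.zeros(V, V, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=0.1)
    for _ in range(500):
        opt.zero_grad()
        log_q = F.log_softmax(logits.view(-1), dim=0).view(V, V)       # log q(x1, x2)
        log_q1 = log_q - torch.logsumexp(log_q, dim=0, keepdim=True)   # log q(x1 | x2)
        log_q2 = log_q - torch.logsumexp(log_q, dim=1, keepdim=True)   # log q(x2 | x1)
        # Sum of KL(q(.|ctx) || p_mlm(.|ctx)) over both directions and all contexts.
        loss = (log_q1.exp() * (log_q1 - p1.log())).sum() \
             + (log_q2.exp() * (log_q2 - p2.log())).sum()
        loss.backward()
        opt.step()

    q = F.log_softmax(logits.view(-1), dim=0).view(V, V).exp()
    print("MRF-style joint:\n", mrf)
    print("Closest-conditionals joint:\n", q)

Note that two arbitrary conditional tables are generally incompatible with any single joint, which is the premise of the paper: the optimizer returns the nearest joint under this divergence, whereas the MRF product is a fixed closed form.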
DOI: 10.48550/arxiv.2305.15501