Do Vision and Language Models Share Concepts? A Vector Space Alignment Study

Bibliographic Details
Published in: Transactions of the Association for Computational Linguistics, Vol. 12, pp. 1232-1249
Main Authors: Li, Jiaang; Kementchedjhieva, Yova; Fierro, Constanza; Søgaard, Anders
Format: Journal Article
Language: English
Published: Cambridge, Massachusetts, USA: The MIT Press, 30.09.2024
ISSN: 2307-387X
DOI: 10.1162/tacl_a_00698

Summary: Large-scale pretrained language models (LMs) are said to “lack the ability to connect utterances to the world” (Bender and Koller, 2020), because they do not have “mental models of the world” (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT, and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of vision models, subject to dispersion, polysemy, and frequency. This has important implications for multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).
Bibliography: 2024
ISSN: 2307-387X
DOI: 10.1162/tacl_a_00698
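
This record does not reproduce the paper's method, but the “vector space alignment” named in the title is commonly tested along the lines sketched below: fit an orthogonal (Procrustes) map from paired language-model embeddings to vision-model embeddings on a training split, then score nearest-neighbour retrieval precision on held-out pairs. This is a minimal illustrative sketch, not the authors' code; the matched embedding dimensions, the toy data, and the precision@k metric are all assumptions for the example.

# Minimal sketch (assumption, not the authors' released code): test whether a
# language-model embedding space and a vision-model embedding space are
# approximately isomorphic by aligning paired vectors with orthogonal
# Procrustes and scoring nearest-neighbour retrieval on held-out pairs.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def precision_at_k(lm_train, vis_train, lm_test, vis_test, k=1):
    """Fit a rotation LM -> vision on training pairs; report retrieval P@k on test pairs."""
    # Orthogonal R minimising ||lm_train @ R - vis_train||_F (the two spaces must share a dimension here).
    R, _ = orthogonal_procrustes(lm_train, vis_train)
    mapped = lm_test @ R                                    # project LM vectors into the vision space
    mapped = mapped / np.linalg.norm(mapped, axis=1, keepdims=True)
    vis = vis_test / np.linalg.norm(vis_test, axis=1, keepdims=True)
    sims = mapped @ vis.T                                   # cosine similarities, test LM x test vision
    topk = np.argsort(-sims, axis=1)[:, :k]                 # indices of the k nearest vision vectors
    gold = np.arange(len(lm_test))[:, None]                 # pair i's correct neighbour is index i
    return float((topk == gold).any(axis=1).mean())

# Toy usage with random, dimension-matched vectors; real use would pair
# concept-level representations from an LM and a vision model.
rng = np.random.default_rng(0)
lm, vis = rng.normal(size=(200, 64)), rng.normal(size=(200, 64))
print(precision_at_k(lm[:150], vis[:150], lm[150:], vis[150:], k=10))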