Identifiability Results for Multimodal Contrastive Learning
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 16.03.2023 |
Subjects | |
Online Access | Get full text |
Summary: | Contrastive learning is a cornerstone underlying recent progress in multi-view and multimodal learning, e.g., in representation learning with image/caption pairs. While its effectiveness is not yet fully understood, a line of recent work reveals that contrastive learning can invert the data generating process and recover ground truth latent factors shared between views. In this work, we present new identifiability results for multimodal contrastive learning, showing that it is possible to recover shared factors in a more general setup than the multi-view setting studied previously. Specifically, we distinguish between the multi-view setting with one generative mechanism (e.g., multiple cameras of the same type) and the multimodal setting that is characterized by distinct mechanisms (e.g., cameras and microphones). Our work generalizes previous identifiability results by redefining the generative process in terms of distinct mechanisms with modality-specific latent variables. We prove that contrastive learning can block-identify latent factors shared between modalities, even when there are nontrivial dependencies between factors. We empirically verify our identifiability results with numerical simulations and corroborate our findings on a complex multimodal dataset of image/text pairs. Zooming out, our work provides a theoretical basis for multimodal representation learning and explains in which settings multimodal contrastive learning can be effective in practice. |
---|---|
DOI: | 10.48550/arxiv.2303.09166 |
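
The summary describes a concrete setup: two distinct generative mechanisms each mix a shared latent block with modality-specific latents, and contrastive learning on the paired observations is shown to block-identify the shared factors, with numerical simulations as verification. The sketch below is a minimal, hypothetical PyTorch illustration of such a simulation; it is not the authors' code, and all names and hyperparameters (`random_mlp`, `SHARED_DIM`, the InfoNCE temperature, the linear probe) are assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's implementation): two random
# nonlinear mechanisms share a latent block, two encoders are trained with a
# symmetric InfoNCE loss on paired samples, and a crude probe checks whether
# the shared block is recoverable from the learned representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

SHARED_DIM, PRIVATE_DIM, OBS_DIM, ENC_DIM = 4, 2, 16, 4


def random_mlp(in_dim, out_dim):
    """A fixed random nonlinear mixing, standing in for one modality's mechanism."""
    net = nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, out_dim))
    for p in net.parameters():
        p.requires_grad_(False)
    return net


# Two *distinct* mechanisms (e.g., "camera" vs. "microphone"), each mixing the
# shared block with its own modality-specific latent.
mix1 = random_mlp(SHARED_DIM + PRIVATE_DIM, OBS_DIM)
mix2 = random_mlp(SHARED_DIM + PRIVATE_DIM, OBS_DIM)


def sample_pair(n):
    # For simplicity the factors are sampled independently here; the paper's
    # result also covers nontrivial dependencies between factors.
    z = torch.randn(n, SHARED_DIM)    # factors shared across modalities
    m1 = torch.randn(n, PRIVATE_DIM)  # modality-specific factors
    m2 = torch.randn(n, PRIVATE_DIM)
    x1 = mix1(torch.cat([z, m1], dim=1))
    x2 = mix2(torch.cat([z, m2], dim=1))
    return x1, x2, z


def make_encoder():
    return nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(), nn.Linear(128, ENC_DIM))


g1, g2 = make_encoder(), make_encoder()
opt = torch.optim.Adam(list(g1.parameters()) + list(g2.parameters()), lr=1e-3)


def infonce(h1, h2, tau=0.1):
    """Symmetric InfoNCE: matched pairs are positives, the rest of the batch negatives."""
    h1, h2 = F.normalize(h1, dim=1), F.normalize(h2, dim=1)
    logits = h1 @ h2.T / tau
    labels = torch.arange(h1.shape[0])
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))


for step in range(2000):
    x1, x2, _ = sample_pair(256)
    loss = infonce(g1(x1), g2(x2))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Rough check: regress the true shared factors on the learned representation.
# The paper's guarantee is only up to an invertible transformation of the
# shared block, so a linear probe is an optimistic, illustrative proxy rather
# than the formal block-identifiability criterion.
with torch.no_grad():
    x1, _, z = sample_pair(4096)
    h = g1(x1)
    w = torch.linalg.lstsq(h, z).solution
    r2 = 1.0 - ((h @ w - z) ** 2).sum() / ((z - z.mean(0)) ** 2).sum()
    print(f"linear probe R^2 for shared factors: {r2.item():.3f}")
```

Swapping `mix1` and `mix2` for identical mixings would correspond to the multi-view setting with one generative mechanism; keeping them distinct is what makes this a (toy) version of the multimodal setting discussed in the summary.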