Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations


Bibliographic Details
Published in: arXiv.org
Main Authors: Eastwood, Cian; von Kügelgen, Julius; Ericsson, Linus; Bouchacourt, Diane; Vincent, Pascal; Schölkopf, Bernhard; Ibrahim, Mark
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 20.08.2024

Summary: Self-supervised representation learning often uses data augmentations to induce some invariance to "style" attributes of the data. However, with downstream tasks generally unknown at training time, it is difficult to deduce a priori which attributes of the data are indeed "style" and can be safely discarded. To deal with this, current approaches try to retain some style information by tuning the degree of invariance to some particular task, such as ImageNet object classification. However, prior work has shown that such task-specific tuning can lead to significant performance degradation on other tasks that rely on the discarded style. To address this, we introduce a more principled approach that seeks to disentangle style features rather than discard them. The key idea is to add multiple style embedding spaces where: (i) each is invariant to all-but-one augmentation; and (ii) joint entropy is maximized. We formalize our structured data-augmentation procedure from a causal latent-variable-model perspective, and prove identifiability of both content and individual style variables. We empirically demonstrate the benefits of our approach on both synthetic and real-world data.
ISSN: 2331-8422
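
The summary describes a concrete architecture: one content embedding that is invariant to all augmentations, plus one style embedding per augmentation type, each invariant to every augmentation except "its own", with the embeddings kept informative via an entropy-style objective. The following is a minimal PyTorch sketch of that structure. It is not the authors' implementation: the pairing scheme (each paired view differs from the anchor in exactly one augmentation) is one plausible reading of the structured augmentation procedure, the hypersphere uniformity term is only a crude stand-in for joint-entropy maximization, and all class, function, and parameter names are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentStyleEncoder(nn.Module):
    """Shared backbone with one content head and one style head per augmentation type."""

    def __init__(self, in_dim=128, emb_dim=32, num_styles=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.content_head = nn.Linear(256, emb_dim)
        self.style_heads = nn.ModuleList(
            [nn.Linear(256, emb_dim) for _ in range(num_styles)]
        )

    def forward(self, x):
        h = self.backbone(x)
        content = F.normalize(self.content_head(h), dim=-1)
        styles = [F.normalize(head(h), dim=-1) for head in self.style_heads]
        return content, styles


def alignment(a, b):
    # Invariance term: pull paired (unit-norm) embeddings together.
    return (1.0 - (a * b).sum(dim=-1)).mean()


def uniformity(z, t=2.0):
    # Crude entropy proxy: spread embeddings over the unit hypersphere.
    return torch.pdist(z).pow(2).mul(-t).exp().mean().log()


def disentanglement_loss(model, anchor, views):
    """anchor: a batch of inputs. views[k]: the same batch re-augmented so that
    only the k-th augmentation differs from the anchor (all others shared)."""
    c_a, s_a = model(anchor)
    total = uniformity(c_a)
    for k, view in enumerate(views):
        c_v, s_v = model(view)
        total = total + alignment(c_a, c_v)        # content ignores every augmentation
        for j in range(len(s_a)):
            if j != k:                             # style j stays invariant to augmentation k
                total = total + alignment(s_a[j], s_v[j])
        total = total + uniformity(s_v[k])         # keep the varying style space informative
    return total


# Illustrative usage with random tensors standing in for augmented batches.
model = ContentStyleEncoder()
anchor = torch.randn(64, 128)
views = [torch.randn(64, 128) for _ in range(3)]
print(disentanglement_loss(model, anchor, views))

In a real pipeline, the anchor and each views[k] would come from the structured augmentation scheme the summary refers to, i.e. the pair shares all augmentation parameters except the k-th, so that only the k-th style space is permitted to change between the two views.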