Latent Space Translation via Semantic Alignment


Bibliographic Details
Published in: arXiv.org
Main Authors: Maiorca, Valentino; Moschella, Luca; Norelli, Antonio; Fumero, Marco; Locatello, Francesco; Rodolà, Emanuele
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 11.02.2024

Summary: While different neural models often exhibit latent spaces that are alike when exposed to semantically related data, this intrinsic similarity is not always immediately discernible. Towards a better understanding of this phenomenon, our work shows how representations learned from these neural modules can be translated between different pre-trained networks via simpler transformations than previously thought. An advantage of this approach is the ability to estimate these transformations using standard, well-understood algebraic procedures that have closed-form solutions. Our method directly estimates a transformation between two given latent spaces, thereby enabling effective stitching of encoders and decoders without additional training. We extensively validate the adaptability of this translation procedure in different experimental settings: across different training runs, domains, architectures (e.g., ResNet, CNN, ViT), and in multiple downstream tasks (classification, reconstruction). Notably, we show how it is possible to zero-shot stitch text encoders and vision decoders, or vice versa, yielding surprisingly good classification performance in this multimodal setting.
ISSN: 2331-8422
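
The summary notes that the latent-space translation can be estimated with standard algebraic procedures that have closed-form solutions. Below is a minimal sketch of one such procedure, orthogonal Procrustes solved via SVD; the function name, the NumPy setup, and the assumption that both latent spaces share the same dimensionality are illustrative choices here, not necessarily the paper's exact formulation.

    import numpy as np

    def fit_orthogonal_translation(X, Y):
        # Closed-form orthogonal Procrustes: find the orthogonal matrix R
        # minimizing ||X @ R - Y||_F, where rows of X and Y are the latent
        # vectors of the same anchor samples under two different networks.
        U, _, Vt = np.linalg.svd(X.T @ Y)  # SVD of the cross-covariance
        return U @ Vt                      # orthogonal (d, d) translation

    # Toy check: recover a random orthogonal map applied to a latent space.
    rng = np.random.default_rng(0)
    Z = rng.standard_normal((100, 8))                 # "source" latents
    Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # hidden orthogonal map
    R = fit_orthogonal_translation(Z, Z @ Q)
    print(np.allclose(Z @ R, Z @ Q))                  # True

In a zero-shot stitching setup along the lines the summary describes, X and Y would come from encoding a shared set of anchor inputs with two pre-trained models; the estimated R then maps new source latents into the target model's space so its decoder or classifier head can be reused without additional training.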