The Surprising Effectiveness of Linear Unsupervised Image-to-Image Translation


Bibliographic Details
Published in: arXiv.org
Main Authors: Richardson, Eitan; Weiss, Yair
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 24.07.2020

More Information
Summary: Unsupervised image-to-image translation is an inherently ill-posed problem. Recent methods based on deep encoder-decoder architectures have shown impressive results, but we show that they succeed only due to a strong locality bias, and that they fail to learn very simple nonlocal transformations (e.g. mapping upside-down faces to upright faces). When the locality bias is removed, the methods are too powerful and may fail to learn simple local transformations. In this paper we introduce linear encoder-decoder architectures for unsupervised image-to-image translation. We show that learning is much easier and faster with these architectures, and yet the results are surprisingly effective. In particular, we show a number of local problems for which the results of the linear methods are comparable to those of state-of-the-art architectures but with a fraction of the training time, and a number of nonlocal problems for which the state-of-the-art fails while linear methods succeed.
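The summary's example of a "very simple nonlocal transformation" can be made concrete: vertically flipping an image is exactly a linear map on its flattened pixel vector (a permutation matrix), so it lies within the class of linear encoder-decoder models. The sketch below is an illustration of that observation only, not the authors' method; the toy image size and grayscale assumption are ours.

```python
import numpy as np

# Vertical flipping as a linear operator: a permutation matrix P acting on
# the flattened pixel vector. Nonlocal (pixels move across the whole image),
# yet trivially representable by a linear model.

H, W = 4, 4          # toy image size (assumption: small grayscale image)
n = H * W

# Build the permutation matrix that reverses the row order of an H x W image:
# output pixel (H-1-r, c) takes its value from input pixel (r, c).
P = np.zeros((n, n))
for r in range(H):
    for c in range(W):
        P[(H - 1 - r) * W + c, r * W + c] = 1.0

img = np.arange(n, dtype=float).reshape(H, W)
flipped = (P @ img.ravel()).reshape(H, W)

# The linear map reproduces np.flipud exactly.
assert np.array_equal(flipped, np.flipud(img))
```

Because P is a permutation matrix it is orthogonal (P.T @ P is the identity), so the inverse translation (upright back to upside-down) is also linear, which matches the paper's point that such nonlocal mappings are easy for linear architectures.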
ISSN:2331-8422