ReMix: Towards Image-to-Image Translation with Limited Data
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 31.03.2021 |
Summary: | Image-to-image (I2I) translation methods based on generative adversarial networks (GANs) typically suffer from overfitting when limited training data is available. In this work, we propose a data augmentation method (ReMix) to tackle this issue. We interpolate training samples at the feature level and propose a novel content loss based on the perceptual relations among samples. The generator learns to translate the in-between samples rather than memorizing the training set, and thereby forces the discriminator to generalize. The proposed approach effectively reduces the ambiguity of generation and renders content-preserving results. The ReMix method can be easily incorporated into existing GAN models with minor modifications. Experimental results on numerous tasks demonstrate that GAN models equipped with the ReMix method achieve significant improvements. |
---|---|
DOI: | 10.48550/arxiv.2103.16835 |
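
The abstract names two ingredients: feature-level sample interpolation and a content loss built on perceptual relations among samples. The sketch below is a hypothetical illustration of that mixup-style recipe, not the authors' released code: the encoder/decoder split of the generator, the Beta-sampled mixing coefficient, the `perc` feature extractor, and the exact form of the relation loss are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def remix_augment(encoder, decoder, x1, x2, a=1.0, b=1.0):
    """Interpolate two training samples at the feature level
    (assumed generator split: encoder -> features -> decoder)."""
    alpha = torch.distributions.Beta(a, b).sample()
    # Convex combination of the two feature maps.
    f_mix = alpha * encoder(x1) + (1 - alpha) * encoder(x2)
    return decoder(f_mix), alpha

def relation_content_loss(perc, y_mix, y1, y2, alpha):
    """Content-loss sketch: the perceptual distances from the mixed
    output to the two endpoint outputs should mirror the mixing ratio
    (y_mix closer to y1 when alpha is large). `perc` is any perceptual
    feature extractor, e.g. VGG features; the paper's exact
    formulation may differ."""
    d1 = F.l1_loss(perc(y_mix), perc(y1))
    d2 = F.l1_loss(perc(y_mix), perc(y2))
    # If d1 ~ (1 - alpha) * D and d2 ~ alpha * D, this term vanishes.
    return torch.abs(alpha * d1 - (1 - alpha) * d2)
```

Because the augmentation only changes how generator inputs and the content term are formed, it can be dropped into an existing GAN training loop without altering the discriminator, consistent with the paper's claim that ReMix requires only minor modifications to existing models.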