Taming the Tail in Class-Conditional GANs: Knowledge Sharing via Unconditional Training at Lower Resolutions
Format | Journal Article |
---|---|
Language | English |
Published | 26.02.2024 |
Summary: Despite extensive research on training generative adversarial networks (GANs) with limited training data, learning to generate images from long-tailed training distributions remains fairly unexplored. In the presence of imbalanced multi-class training data, GANs tend to favor classes with more samples, leading to the generation of low-quality, less diverse samples for tail classes. In this study, we aim to improve the training of class-conditional GANs with long-tailed data. We propose a straightforward yet effective method for knowledge sharing that allows tail classes to borrow rich information from classes with more abundant training data. More concretely, we propose modifications to existing class-conditional GAN architectures so that the lower-resolution layers of the generator are trained entirely unconditionally, while class-conditional generation is reserved for the higher-resolution layers. Experiments on several long-tailed benchmarks and GAN architectures demonstrate a significant improvement over existing methods in both the diversity and fidelity of the generated images. The code is available at https://github.com/khorrams/utlo.
DOI: 10.48550/arxiv.2402.17065
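To make the architectural idea in the summary concrete, below is a minimal PyTorch sketch of one way the conditioning split could look: the generator's low-resolution blocks see only the noise vector, and the class embedding enters only in the high-resolution blocks. The block structure, resolutions, FiLM-style conditioning, and all names here are illustrative assumptions, not the authors' implementation; see the linked repository for that.

```python
# Sketch: unconditional low-resolution blocks, class-conditional high-resolution
# blocks. Hypothetical names and hyperparameters throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpBlock(nn.Module):
    """Upsample 2x, then convolve; optionally modulated by a class embedding."""
    def __init__(self, in_ch, out_ch, cond_dim=None):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        # When cond_dim is given, a FiLM-style per-class scale/shift makes the
        # block class-conditional; otherwise the block is fully unconditional.
        self.film = nn.Linear(cond_dim, 2 * out_ch) if cond_dim else None
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x, cond=None):
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        x = self.conv(x)
        if self.film is not None and cond is not None:
            scale, shift = self.film(cond).chunk(2, dim=1)
            x = x * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.act(x)

class Generator(nn.Module):
    def __init__(self, z_dim=128, num_classes=100, emb_dim=64, base_ch=256):
        super().__init__()
        self.embed = nn.Embedding(num_classes, emb_dim)
        self.fc = nn.Linear(z_dim, base_ch * 4 * 4)  # 4x4 seed from noise only
        # Low-resolution blocks (4->8->16): unconditional and shared across all
        # classes, so tail classes benefit from every training image.
        self.uncond = nn.ModuleList([
            UpBlock(base_ch, base_ch),
            UpBlock(base_ch, base_ch // 2),
        ])
        # High-resolution blocks (16->32->64): class-conditional.
        self.cond = nn.ModuleList([
            UpBlock(base_ch // 2, base_ch // 4, cond_dim=emb_dim),
            UpBlock(base_ch // 4, base_ch // 8, cond_dim=emb_dim),
        ])
        self.to_rgb = nn.Conv2d(base_ch // 8, 3, 1)

    def forward(self, z, y):
        h = self.fc(z).view(z.size(0), -1, 4, 4)
        for block in self.uncond:   # no class input at low resolution
            h = block(h)
        c = self.embed(y)
        for block in self.cond:     # class information enters only here
            h = block(h, c)
        return torch.tanh(self.to_rgb(h))

# Usage: sample eight 64x64 images for class 7.
g = Generator()
imgs = g(torch.randn(8, 128), torch.full((8,), 7, dtype=torch.long))
print(imgs.shape)  # torch.Size([8, 3, 64, 64])
```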