Adversarial Learning With Knowledge of Image Classification for Improving GANs
| Published in | IEEE Access, Vol. 7, pp. 56591-56605 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2019 |
| Summary | Generating realistic images with fine details is still challenging due to the difficulties of training GANs and mode collapse. To resolve this problem, our main idea is that leveraging the knowledge of an image classification network pre-trained on a large-scale dataset (e.g., ImageNet) would improve a GAN. By using the gradient of this highly discriminative network (i.e., a discriminator) during training, we can guide the gradient of a generator gradually toward the real data region. However, excessive negative feedback from the powerful classifier often prevents a generator from producing diverse images. Based on this idea, we design a GAN that includes the added discriminator and propose a novel energy function to transfer the pre-trained knowledge to a generator and to control the feedback of the added discriminator. We also present an incremental learning method that prevents the generator's density from collapsing to a low-entropy distribution when training our GAN with this energy function. We incorporate our method into DCGAN and demonstrate improved image quality, even at high resolution, on several datasets compared to DCGAN. In addition, we compare our method with recent GANs in terms of the diversity of generated samples on the CIFAR-10 and STL-10 datasets and provide extensive ablation studies to prove the benefits of our method. |
| ISSN | 2169-3536 |
| DOI | 10.1109/ACCESS.2019.2913697 |
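
The abstract describes blending feedback from a frozen, ImageNet-pre-trained classifier (the "added discriminator") into the generator's objective, with a weight that caps the classifier's influence so its strong gradients do not collapse sample diversity. Below is a minimal sketch of that idea in PyTorch; it is not the paper's exact energy function or incremental learning scheme, and the feature-matching form of the auxiliary term and the name `lambda_cls` are illustrative assumptions.

```python
# Sketch: generator loss = standard adversarial loss + weighted feedback
# from a frozen pre-trained classifier (an assumed stand-in for the
# paper's "added discriminator"; not the authors' exact formulation).
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Frozen ImageNet-pre-trained classifier supplying the extra feedback.
classifier = resnet18(weights="IMAGENET1K_V1").eval()
for p in classifier.parameters():
    p.requires_grad = False

bce = nn.BCEWithLogitsLoss()

def generator_loss(d_logits_fake, fake_imgs, real_imgs, lambda_cls=0.1):
    """Non-saturating GAN loss plus a classifier-feature term.

    lambda_cls (assumed hyperparameter) limits the classifier's feedback
    so its gradients do not dominate and push the generator toward a
    low-entropy output distribution.
    """
    # Standard adversarial term against the ordinary GAN discriminator.
    adv = bce(d_logits_fake, torch.ones_like(d_logits_fake))
    # Match classifier outputs on fakes to those on real images; gradients
    # flow only into the generator because the classifier is frozen.
    f_fake = classifier(fake_imgs)
    f_real = classifier(real_imgs).detach()
    aux = (f_fake - f_real).pow(2).mean()
    return adv + lambda_cls * aux
```

In a DCGAN-style training loop, this loss would replace the usual generator loss while the discriminator update stays unchanged; annealing `lambda_cls` over training is one plausible way to realize the abstract's "control the feedback" goal.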