Cycle-object consistency for image-to-image domain adaptation



Bibliographic Details
Published in: Pattern Recognition, Vol. 138, p. 109416
Main Authors: Lin, Che-Tsung, Kew, Jie-Long, Chan, Chee Seng, Lai, Shang-Hong, Zach, Christopher
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.06.2023
Summary:

• In this paper, for the first time, we introduce an instance-aware GAN framework, AugGAN-Det, that jointly trains a generator with an object detector (for image-object style) and a discriminator (for global style).
• In contrast to previous instance-aware GAN models, our model internalizes global and object style transfer without using a detector to extract instance features and explicitly align them between the original and the translated images.
• Extensive experimental results demonstrate that our model achieves state-of-the-art performance across different weather and time-of-day conditions on the INIT, GTA, and BDD100k datasets.

Recent advances in generative adversarial networks (GANs) have proven effective in performing domain adaptation for object detectors through data augmentation. While GANs are exceptionally successful, methods that preserve objects well in the image-to-image translation task usually require an auxiliary task, such as semantic segmentation, to prevent the image content from becoming too distorted. However, pixel-level annotations are difficult to obtain in practice. Alternatively, instance-aware image-translation models treat object instances and the background separately, yet they require object detectors at test time, assuming that off-the-shelf detectors work well in both domains. In this work, we present AugGAN-Det, which introduces a Cycle-object Consistency (CoCo) loss to generate instance-aware translated images across complex domains. The object detector of the target domain is directly leveraged in generator training and guides the preserved objects in the translated images to carry target-domain appearances. Compared to previous models, which, for example, require pixel-level semantic segmentation to force the latent distribution to be object-preserving, this work only needs bounding-box annotations, which are significantly easier to acquire. Moreover, unlike instance-aware GAN models, our model internalizes global and object style transfer without explicitly aligning the instance features. Most importantly, a detector is not required at test time. Experimental results demonstrate that our model outperforms recent object-preserving and instance-level models and achieves state-of-the-art detection accuracy and visual perceptual quality.
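To make the described training objective concrete, the following is a minimal sketch, not the authors' released code, of how a source-to-target generator could be optimized jointly with a frozen target-domain detector and a global discriminator in the spirit of the Cycle-object Consistency idea. The names G_s2t, G_t2s, D_t, detector_t, the least-squares adversarial form, and the loss weights are assumptions for illustration only.

import torch
import torch.nn.functional as F

def generator_step(G_s2t, G_t2s, D_t, detector_t, x_s, boxes_s,
                   lambda_cyc=10.0, lambda_det=1.0):
    # x_s     : batch of source-domain images
    # boxes_s : ground-truth bounding boxes (with labels) for x_s;
    #           no pixel-level masks are needed.

    # Translate the source image into the target domain.
    x_t_fake = G_s2t(x_s)

    # Global style: least-squares adversarial loss against the
    # target-domain discriminator (assumed GAN formulation).
    d_out = D_t(x_t_fake)
    loss_adv = F.mse_loss(d_out, torch.ones_like(d_out))

    # Cycle consistency: translating back should recover the source image.
    loss_cyc = F.l1_loss(G_t2s(x_t_fake), x_s)

    # Object style / preservation: the frozen target-domain detector is
    # assumed to return a scalar detection loss for the translated image,
    # evaluated against the original source-domain boxes.
    loss_det = detector_t(x_t_fake, boxes_s)

    return loss_adv + lambda_cyc * loss_cyc + lambda_det * loss_det

In this sketch the detector supplies the gradient that keeps translated objects recognizable in the target domain, which is why no detector is needed once training is finished.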
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2023.109416