RPCL: A Framework for Improving Cross-Domain Detection with Auxiliary Tasks

Bibliographic Details
Main Authors: Li, Kai; Wigington, Curtis; Tensmeyer, Chris; Morariu, Vlad I.; Zhao, Handong; Manjunatha, Varun; Barmpalios, Nikolaos; Fu, Yun
Format: Journal Article
Language: English
Published: 17.04.2021

Summary: Cross-Domain Detection (XDD) aims to train an object detector using labeled images from a source domain such that it performs well on a target domain for which only unlabeled images are available. Existing approaches achieve this either by aligning the feature maps or the region proposals from the two domains, or by transferring the style of source images to that of target images. In contrast to prior work, this paper provides a complementary solution that aligns domains by learning the same auxiliary tasks in both domains simultaneously. These auxiliary tasks push images from both domains toward shared spaces, which bridges the domain gap. Specifically, the paper proposes Rotation Prediction and Consistency Learning (RPCL), a framework that complements existing XDD methods for domain alignment by leveraging two auxiliary tasks. The first encourages the model to extract region proposals from foreground regions by rotating an image and predicting the rotation angle from the extracted region proposals. The second encourages the model to be robust to changes in the image space by optimizing it to make consistent class predictions for region proposals regardless of image perturbations. Experiments show that detection performance is consistently and significantly enhanced when the two proposed tasks are applied to existing XDD methods.
DOI: 10.48550/arxiv.2104.08689
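
The summary describes two auxiliary losses: predicting an image's rotation angle from its region-proposal features, and enforcing consistent class predictions for matched proposals under image perturbations. The following PyTorch sketch illustrates plausible forms of these two losses only; the RotationHead module, the mean-pooling over proposals, the perturbation choice, and the loss weights are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RotationHead(nn.Module):
    """Small classifier over pooled proposal features; 4 classes for 0/90/180/270 degrees."""

    def __init__(self, feat_dim=256, num_rotations=4):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_rotations)

    def forward(self, pooled_feats):
        return self.fc(pooled_feats)


def rotation_prediction_loss(rotation_head, proposal_feats, rot_label):
    # proposal_feats: (num_proposals, feat_dim) RoI-pooled features from a rotated image.
    # Averaging proposals before classification is an assumption made for brevity;
    # the idea is that proposals must cover orientation-revealing foreground content.
    pooled = proposal_feats.mean(dim=0, keepdim=True)   # (1, feat_dim)
    logits = rotation_head(pooled)                      # (1, num_rotations)
    return F.cross_entropy(logits, rot_label)


def consistency_loss(probs_orig, probs_perturbed, eps=1e-8):
    # probs_*: (num_proposals, num_classes) class posteriors for matched proposals
    # from the original image and a perturbed (e.g. color-jittered) view of it.
    return F.kl_div((probs_perturbed + eps).log(), probs_orig, reduction="batchmean")


if __name__ == "__main__":
    head = RotationHead(feat_dim=256)
    feats = torch.randn(32, 256)                                         # 32 proposals
    rot_loss = rotation_prediction_loss(head, feats, torch.tensor([1]))  # 90 degrees

    p1 = torch.softmax(torch.randn(32, 9), dim=1)
    p2 = torch.softmax(torch.randn(32, 9), dim=1)
    con_loss = consistency_loss(p1, p2)

    # Both terms would be added to the usual supervised detection loss, e.g.
    # total = det_loss + lambda_rot * rot_loss + lambda_con * con_loss
    # (the weighting scheme here is an assumption, not taken from the paper).
    print(rot_loss.item(), con_loss.item())
```

Because both losses can be computed on unlabeled images, they can be applied to source and target domains alike, which is what lets them act as a domain-bridging signal on top of an existing XDD method.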