Foregroundness-Aware Task Disentanglement and Self-Paced Curriculum Learning for Domain Adaptive Object Detection
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, No. 1, pp. 369-380
Format: Journal Article
Language: English
Published: United States: IEEE, 01.01.2025
Summary: Unsupervised domain adaptive object detection (UDA-OD) is a challenging problem, since it must locate and recognize objects while maintaining generalization ability across domains. Most existing UDA-OD methods directly integrate the adaptive modules into the detectors. Although this integration enhances generalization, it can significantly sacrifice detection performance. To solve this problem, we propose an effective framework, named foregroundness-aware task disentanglement and self-paced curriculum adaptation (FA-TDCA), which disentangles the UDA-OD task into four independent subtasks: source detector pretraining, classification adaptation, location adaptation, and target detector training. This disentanglement transfers knowledge effectively while preserving the detection performance of our model. In addition, we propose a new metric, foregroundness, and use it to evaluate the confidence of the location result. We use both foregroundness and classification confidence to assess the label quality of the proposals. For effective knowledge transfer across domains, we utilize a self-paced curriculum learning paradigm to train the adaptors and gradually improve the quality of the pseudolabels associated with the target samples. Experimental results indicate that our method achieves state-of-the-art results on four cross-domain object detection tasks.
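The summary names two mechanisms: scoring each proposal by combining classification confidence with the proposed foregroundness metric, and a self-paced curriculum that gradually admits more target pseudolabels over training rounds. The sketch below illustrates how such a selection step could look; it is not the authors' implementation. All names (`Proposal`, `label_quality`, the mixing weight `alpha`, and the threshold schedule constants) are illustrative assumptions, and the paper's exact definition of foregroundness and its combination rule are not reproduced here.

```python
"""Minimal sketch (assumed, not the authors' code) of pseudolabel scoring
and self-paced selection as described in the abstract."""
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Proposal:
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2) predicted location
    cls_confidence: float   # classifier score in [0, 1]
    foregroundness: float   # confidence that the box covers a foreground object,
                            # in [0, 1]; its computation is defined in the paper


def label_quality(p: Proposal, alpha: float = 0.5) -> float:
    """Combine the two confidences into one quality score.
    A geometric mean is one plausible combination (an assumption here);
    it penalizes proposals that are weak on either axis."""
    return (p.cls_confidence ** alpha) * (p.foregroundness ** (1.0 - alpha))


def select_pseudolabels(proposals: List[Proposal],
                        round_idx: int,
                        tau0: float = 0.9,
                        decay: float = 0.1,
                        tau_min: float = 0.5) -> List[Proposal]:
    """Self-paced selection: start with only the most reliable proposals
    (high threshold) and lower the bar each curriculum round, so harder
    target samples are gradually admitted as pseudolabels."""
    tau = max(tau_min, tau0 - decay * round_idx)
    return [p for p in proposals if label_quality(p) >= tau]


# Toy usage: three proposals observed over three curriculum rounds.
if __name__ == "__main__":
    props = [
        Proposal((0, 0, 10, 10), cls_confidence=0.95, foregroundness=0.90),
        Proposal((5, 5, 20, 20), cls_confidence=0.80, foregroundness=0.70),
        Proposal((8, 2, 12, 30), cls_confidence=0.40, foregroundness=0.55),
    ]
    for r in range(3):
        kept = select_pseudolabels(props, round_idx=r)
        print(f"round {r}: kept {len(kept)} pseudolabel(s)")
```

Running the toy example keeps one proposal in round 0 and two by round 2, showing the intended behavior: early rounds train the adaptors on only the cleanest target pseudolabels, and later rounds trade some precision for coverage as the model improves.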
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2023.3331778