Not All Labels Are Equal: Rationalizing The Labeling Costs for Training Object Detection

Bibliographic Details
Published in: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14472-14481
Main Authors: Elezi, Ismail; Yu, Zhiding; Anandkumar, Anima; Leal-Taixe, Laura; Alvarez, Jose M.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2022
Summary: Deep neural networks have reached high accuracy on object detection, but their success hinges on large amounts of labeled data. To reduce this dependency on labels, various active learning strategies have been proposed, based on the confidence of the detector. However, these methods are biased towards high-performing classes and lead to acquired datasets that are not good representatives of the testing set data. In this work, we propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector, ensuring that the network performs well on all classes. Furthermore, our method leverages auto-labeling to suppress a potential distribution drift while boosting the performance of the model. Experiments on PASCAL VOC07+12 and MS-COCO show that our method consistently outperforms a wide range of active learning methods, yielding up to a 7.7% improvement in mAP, or up to an 82% reduction in labeling cost. Code is available at https://github.com/NVlabs/AL-SSL.
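
To make the abstract's premise concrete, below is a minimal, hypothetical sketch of a confidence-based acquisition step of the kind the summary says prior active learning methods use: score each unlabeled image by the entropy of the detector's per-box class posteriors and send the most uncertain images to annotators. This is a generic illustration, not the authors' AL-SSL method (which additionally accounts for class-wise robustness and uses auto-labeling); all function names and the toy data are invented for this sketch.

```python
import numpy as np

def image_uncertainty(class_probs):
    """Score an image by the entropy of its most uncertain detection.

    class_probs: array of shape (num_detections, num_classes), where each
    row is a softmax distribution over classes for one predicted box.
    """
    eps = 1e-12  # guard against log(0)
    entropies = -np.sum(class_probs * np.log(class_probs + eps), axis=1)
    return entropies.max() if len(entropies) else 0.0

def select_for_labeling(scored_pool, budget):
    """Pick the `budget` highest-uncertainty images to send to annotators."""
    ranked = sorted(scored_pool, key=lambda kv: kv[1], reverse=True)
    return [image_id for image_id, _ in ranked[:budget]]

# Toy usage: three unlabeled images, each with fake class distributions
# for 5 predicted boxes over 20 classes (as in PASCAL VOC).
rng = np.random.default_rng(0)
pool = {f"img_{i}": rng.dirichlet(np.ones(20), size=5) for i in range(3)}
scores = [(img_id, image_uncertainty(p)) for img_id, p in pool.items()]
print(select_for_labeling(scores, budget=2))
```

Because such a score is driven purely by detector confidence, classes the detector already handles well dominate the ranking, which is exactly the bias towards high-performing classes that the paper argues leads to unrepresentative acquired datasets.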
ISSN: 2575-7075
DOI: 10.1109/CVPR52688.2022.01409