Equalization Loss v2: A New Gradient Balance Approach for Long-tailed Object Detection
| Published in | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1685–1694 |
|---|---|
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.06.2021 |
Summary: | Recently proposed decoupled training methods have emerged as a dominant paradigm for long-tailed object detection. However, they require an extra fine-tuning stage, and the disjointed optimization of representation and classifier can lead to suboptimal results. Meanwhile, end-to-end training methods, such as equalization loss (EQL), still perform worse than decoupled training methods. In this paper, we reveal that the main issue in long-tailed object detection is the imbalanced gradients between positives and negatives, and find that EQL does not solve it well. To address this problem, we introduce a new version of equalization loss, called equalization loss v2 (EQL v2), a novel gradient-guided reweighting mechanism that re-balances the training process for each category independently and equally. Extensive experiments are performed on the challenging LVIS benchmark. EQL v2 outperforms the original EQL by about 4 points of overall AP, with 14–18 points of improvement on the rare categories. More importantly, it also surpasses decoupled training methods. Without further tuning on the Open Images dataset, EQL v2 improves over EQL by 7.3 points AP, showing strong generalization ability. Code has been released at https://github.com/tztztztztz/eqlv2 |
---|---|
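The core idea in the abstract — tracking the imbalance between accumulated positive and negative gradients per category and using it to re-weight the loss — can be sketched as follows. This is a minimal illustrative sketch only: the mapping function, update rule, and hyperparameters (`gamma`, `mu`, `alpha`) below are assumptions for illustration, not the paper's exact formulation; consult the released code at https://github.com/tztztztztz/eqlv2 for the authors' implementation.

```python
import numpy as np

class GradientGuidedReweighter:
    """Illustrative per-category gradient-guided reweighting sketch.

    Keeps running totals of positive and negative gradient magnitudes for
    each category; the pos/neg ratio drives the loss weights so that
    categories whose negatives dominate (typically rare ones) get their
    positives up-weighted and their negatives down-weighted.
    """

    def __init__(self, num_classes, gamma=12.0, mu=0.8, alpha=4.0):
        # gamma, mu, alpha are hypothetical hyperparameters for this sketch
        self.gamma, self.mu, self.alpha = gamma, mu, alpha
        self.pos_grad = np.zeros(num_classes)  # accumulated |grad| from positives
        self.neg_grad = np.zeros(num_classes)  # accumulated |grad| from negatives

    def _map(self, ratio):
        # squash the pos/neg gradient ratio into (0, 1) with a sigmoid
        return 1.0 / (1.0 + np.exp(-self.gamma * (ratio - self.mu)))

    def weights(self):
        # small ratio -> negatives dominate -> boost positives, damp negatives
        ratio = self.pos_grad / np.maximum(self.neg_grad, 1e-12)
        f = self._map(ratio)
        pos_w = 1.0 + self.alpha * (1.0 - f)  # up-weight suppressed positives
        neg_w = f                             # down-weight their negatives
        return pos_w, neg_w

    def update(self, pos_grad_batch, neg_grad_batch):
        # accumulate this batch's per-category gradient magnitudes
        self.pos_grad += pos_grad_batch
        self.neg_grad += neg_grad_batch

# usage: category 2 (rare, negatives dominate) gets a larger positive weight
# and a smaller negative weight than category 0 (balanced)
rw = GradientGuidedReweighter(num_classes=3)
rw.update(np.array([5.0, 1.0, 0.1]), np.array([5.0, 5.0, 5.0]))
pos_w, neg_w = rw.weights()
```

Because each category's weights depend only on its own accumulated statistics, every category is re-balanced independently, which matches the abstract's description of treating categories "independently and equally".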
ISSN: | 2575-7075 |
DOI: | 10.1109/CVPR46437.2021.00173 |