Re-scoring using image-language similarity for few-shot object detection

Bibliographic Details
Published in: Computer Vision and Image Understanding, Vol. 241, p. 103956
Main Authors: Jung, Min Jae; Han, Seung Dae; Kim, Joohee
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.04.2024
Summary: Few-shot object detection, which focuses on detecting novel objects from only a few labels, is an emerging challenge in the community. Recent studies show that adapting a pre-trained model or modifying the loss function can improve performance. In this paper, we explore leveraging the power of Contrastive Language-Image Pre-training (CLIP) and a hard-negative classification loss in a low-data setting. Specifically, we propose Re-scoring using Image-language Similarity for Few-shot object detection (RISF), which extends Faster R-CNN by introducing a Calibration Module using CLIP (CM-CLIP) and a Background Negative Re-scale Loss (BNRL). The former adapts CLIP, which performs zero-shot classification, to re-score the classification scores of a detector using image-class similarities; the latter is a modified classification loss that accounts for the penalty on fake backgrounds as well as on confusing categories on a generalized few-shot object detection dataset. Extensive experiments on MS-COCO and PASCAL VOC show that the proposed RISF substantially outperforms state-of-the-art approaches. Code is available at: https://github.com/INFINIQ-AI1/RISF

Highlights:
• We propose a modified loss that is effective in Few-Shot Object Detection (FSOD).
• We prove our modified loss function with mathematical rigor.
• We are among the earliest studies in FSOD to utilize the similarity between image and text.
• Extensive experiments demonstrate that our approaches substantially improve on the state of the art.
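
The abstract describes CM-CLIP only at a high level: CLIP's zero-shot classification is used to re-score a detector's class scores via image-class similarity. The sketch below is a minimal illustration of that general idea, not the authors' CM-CLIP: it re-scores Faster R-CNN-style softmax scores with zero-shot CLIP probabilities computed over box crops. The prompt template, the geometric fusion rule, and the `alpha` weight are all assumptions introduced here for illustration.

```python
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def rescore_with_clip(crops, class_names, detector_scores, alpha=0.5):
    """Re-score detector class probabilities with CLIP image-text similarity.

    crops           : list of PIL.Image regions cropped from detected boxes
    class_names     : list of category names, e.g. ["dog", "bicycle", ...]
    detector_scores : (num_boxes, num_classes) softmax scores from the detector
    alpha           : assumed fusion weight between the two score sources
    """
    images = torch.stack([preprocess(c) for c in crops]).to(device)
    texts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(images).float()
        txt_feat = model.encode_text(texts).float()
    # Cosine similarity -> zero-shot class probabilities, as in CLIP's demo code.
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    clip_probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1).cpu()
    # One plausible calibration: a geometric interpolation of the two scores.
    return detector_scores.pow(alpha) * clip_probs.pow(1.0 - alpha)
```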
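Likewise, the abstract says only that BNRL modifies the classification loss to adjust the punishment for fake backgrounds and confusing categories. The following is a minimal sketch of one such re-weighted cross-entropy under that reading; the weighting scheme and the `bg_weight`/`fg_weight` values are assumptions, not the published BNRL.

```python
import torch
import torch.nn.functional as F

def bnrl_like_loss(logits, targets, bg_index, bg_weight=0.5, fg_weight=1.5):
    """Re-weighted cross-entropy over region proposals.

    logits    : (num_proposals, num_classes + 1) raw scores, incl. background
    targets   : (num_proposals,) integer labels; bg_index marks background
    bg_weight : assumed down-weight for background-labeled proposals, softening
                the punishment when an unlabeled novel object ends up annotated
                as background (a "fake background")
    fg_weight : assumed up-weight that sharpens penalties on confusing
                foreground categories
    """
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.where(targets == bg_index,
                          torch.full_like(per_sample, bg_weight),
                          torch.full_like(per_sample, fg_weight))
    return (weights * per_sample).mean()

# Usage with random stand-in data: 20 foreground classes plus background.
logits = torch.randn(8, 21)
targets = torch.randint(0, 21, (8,))
loss = bnrl_like_loss(logits, targets, bg_index=20)
```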
ISSN: 1077-3142, 1090-235X
DOI: 10.1016/j.cviu.2024.103956