Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models
Main Authors | Miyai, Atsuyuki; Yu, Qing; Irie, Go; Aizawa, Kiyoharu |
---|---|
Format | Journal Article |
Language | English |
Published | 10.04.2023 |
Summary: | Extracting in-distribution (ID) images from noisy images scraped from the
Internet is an important preprocessing step for constructing datasets, one that
has traditionally been done manually. Automating this preprocessing with deep
learning techniques presents two key challenges. First, images should be
collected using only the name of the ID class, without training on the ID data.
Second, as multi-object datasets such as COCO illustrate, it is crucial to
identify not only images containing solely ID objects but also images
containing both ID and out-of-distribution (OOD) objects as ID images, in order
to build robust recognizers. In this paper, we propose a novel problem setting
called zero-shot in-distribution (ID) detection, where we identify images
containing ID objects as ID images (even if they also contain OOD objects) and
images lacking ID objects as OOD images, without any training. To solve this
problem, we leverage the powerful zero-shot capability of CLIP and present a
simple and effective approach, Global-Local Maximum Concept Matching (GL-MCM),
based on both global and local visual-text alignments of CLIP features.
Extensive experiments demonstrate that GL-MCM outperforms comparison methods on
both multi-object datasets and single-object ImageNet benchmarks. The code will
be available at https://github.com/AtsuMiyai/GL-MCM. |
DOI: | 10.48550/arxiv.2304.04521 |
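
The summary above describes GL-MCM only at a high level: an image is scored by matching both its global CLIP embedding and its local (patch-level) CLIP embeddings against text embeddings of the ID class names, so that an ID object anywhere in a multi-object image can raise the score. Below is a minimal PyTorch sketch of such a score, assuming the CLIP features have already been extracted; the tensor shapes, the temperature value, and the additive combination of the global and local terms are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def gl_mcm_score(global_feat: torch.Tensor,
                 local_feats: torch.Tensor,
                 text_feats: torch.Tensor,
                 temperature: float = 100.0) -> torch.Tensor:
    """Sketch of a Global-Local Maximum Concept Matching (GL-MCM) score.

    global_feat: (D,)   global CLIP image embedding.
    local_feats: (L, D) local (patch-level) CLIP image embeddings.
    text_feats:  (K, D) CLIP text embeddings of the K ID class prompts.
    Returns a scalar score; higher means "more likely an ID image".
    """
    # Normalize so dot products are cosine similarities.
    g = F.normalize(global_feat, dim=-1)
    loc = F.normalize(local_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)

    # Global term: maximum softmax over ID concepts for the whole image.
    global_mcm = (temperature * g @ t.T).softmax(dim=-1).max()

    # Local term: softmax over concepts at each location, then take the
    # single best (location, concept) match, so one ID object among many
    # OOD objects is enough to raise the score.
    local_mcm = (temperature * loc @ t.T).softmax(dim=-1).max()

    # Combine the global and local alignments; an additive combination
    # is used here as an illustrative assumption.
    return global_mcm + local_mcm
```

In use, `text_feats` would be the encoded ID class prompts (e.g., "a photo of a \<class\>"), and an image would be flagged as ID when its score exceeds a threshold chosen on validation data; images below the threshold are treated as OOD.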