A Sketch Is Worth a Thousand Words: Image Retrieval with Text and Sketch
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 05.08.2022 |
Summary: | We address the problem of retrieving images with both a sketch and a text query. We present TASK-former (Text And SKetch transformer), an end-to-end trainable model for image retrieval using a text description and a sketch as input. We argue that both input modalities complement each other in a manner that cannot be achieved easily by either one alone. TASK-former follows the late-fusion dual-encoder approach, similar to CLIP, which allows efficient and scalable retrieval since the retrieval set can be indexed independently of the queries. We empirically demonstrate that using an input sketch (even a poorly drawn one) in addition to text considerably increases retrieval recall compared to traditional text-based image retrieval. To evaluate our approach, we collect 5,000 hand-drawn sketches for images in the test set of the COCO dataset. The collected sketches are available at https://janesjanes.github.io/tsbir/. |
---|---|
DOI: | 10.48550/arxiv.2208.03354 |
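
The late-fusion dual-encoder retrieval described in the summary can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the encoder functions, the additive fusion of text and sketch embeddings, and the embedding dimension are all assumptions made for the example.

```python
# Rough sketch of late-fusion dual-encoder retrieval (illustrative names only,
# not the authors' code). Gallery images are embedded once and indexed; at query
# time the text and sketch embeddings are fused and compared against the index.
import numpy as np

DIM = 512  # assumed embedding dimension


def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)


# Stand-ins for the image / text / sketch encoders (the real ones are neural nets).
def encode_images(images):
    return l2_normalize(np.random.randn(len(images), DIM))


def encode_text(text):
    return l2_normalize(np.random.randn(DIM))


def encode_sketch(sketch):
    return l2_normalize(np.random.randn(DIM))


# 1) Index the retrieval set independently of any query (the point of late fusion).
gallery = [f"image_{i}.jpg" for i in range(1000)]
index = encode_images(gallery)            # shape (N, DIM)

# 2) At query time, fuse the two query modalities; a simple sum is one option.
query_vec = l2_normalize(
    encode_text("a dog running on a beach") + encode_sketch("dog_sketch.png")
)

# 3) Rank gallery items by cosine similarity (dot product of normalized vectors).
scores = index @ query_vec                # shape (N,)
top_k = np.argsort(-scores)[:5]
print([gallery[i] for i in top_k])
```

Because the gallery embeddings do not depend on the query, they can be precomputed and stored in a vector index, which is what makes this style of retrieval scalable.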