A Sketch Is Worth a Thousand Words: Image Retrieval with Text and Sketch

Bibliographic Details
Published in: arXiv.org
Main Authors: Sangkloy, Patsorn; Jitkrittum, Wittawat; Yang, Diyi; Hays, James
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 05.08.2022

More Information
Summary: We address the problem of retrieving images with both a sketch and a text query. We present TASK-former (Text And SKetch transformer), an end-to-end trainable model for image retrieval using a text description and a sketch as input. We argue that both input modalities complement each other in a manner that cannot be achieved easily by either one alone. TASK-former follows the late-fusion dual-encoder approach, similar to CLIP, which allows efficient and scalable retrieval since the retrieval set can be indexed independently of the queries. We empirically demonstrate that using an input sketch (even a poorly drawn one) in addition to text considerably increases retrieval recall compared to traditional text-based image retrieval. To evaluate our approach, we collect 5,000 hand-drawn sketches for images in the test set of the COCO dataset. The collected sketches are available at https://janesjanes.github.io/tsbir/.
ISSN: 2331-8422
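
For reference, the sketch below illustrates the kind of late-fusion dual-encoder retrieval the summary describes: gallery images are embedded once into an index that is independent of the queries, and a combined text+sketch query is embedded and matched against that index by cosine similarity. The encoder architectures, the additive fusion of the text and sketch embeddings, and all names are illustrative assumptions (a CLIP-style pretrained backbone would normally replace the toy MLPs), not TASK-former's actual implementation.

# Minimal late-fusion dual-encoder retrieval sketch (hypothetical stand-in for
# TASK-former). Encoders, embedding size, and the additive fusion are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 256

class ImageEncoder(nn.Module):
    """Maps a gallery image (flattened here for simplicity) to a joint embedding."""
    def __init__(self, in_dim=3 * 64 * 64, dim=EMBED_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class QueryEncoder(nn.Module):
    """Encodes text tokens and a sketch separately, then late-fuses the embeddings."""
    def __init__(self, vocab=10000, sketch_dim=1 * 64 * 64, dim=EMBED_DIM):
        super().__init__()
        self.text_embed = nn.EmbeddingBag(vocab, dim)   # bag-of-tokens text encoder
        self.sketch_net = nn.Sequential(nn.Flatten(), nn.Linear(sketch_dim, dim))

    def forward(self, token_ids, sketch):
        t = F.normalize(self.text_embed(token_ids), dim=-1)
        s = F.normalize(self.sketch_net(sketch), dim=-1)
        return F.normalize(t + s, dim=-1)               # assumed additive fusion

# Offline: embed the retrieval set once, independently of any query.
img_enc, qry_enc = ImageEncoder(), QueryEncoder()
gallery = torch.randn(1000, 3, 64, 64)                  # placeholder gallery images
with torch.no_grad():
    index = img_enc(gallery)                            # (1000, EMBED_DIM)

# Online: embed one text+sketch query and rank gallery images by cosine similarity.
tokens = torch.randint(0, 10000, (1, 8))                # placeholder caption tokens
sketch = torch.randn(1, 1, 64, 64)                      # placeholder sketch
with torch.no_grad():
    q = qry_enc(tokens, sketch)                         # (1, EMBED_DIM)
scores = q @ index.T                                    # cosine similarity (unit vectors)
topk = scores.topk(5, dim=-1).indices
print("top-5 gallery indices:", topk.tolist())

Because the gallery index is computed offline, each new query needs only a single forward pass through the query encoder, which is what makes the dual-encoder design efficient and scalable, as the summary notes.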