Semi-Supervised Phrase Localization in a Bidirectional Caption-Image Retrieval Framework

Bibliographic Details
Published in: arXiv.org
Main Authors: Das, Deepan; Noor Mohammed Ghouse; Verma, Shashank; Li, Yin
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 08.08.2019
Summary: We introduce a novel deep neural network architecture that links visual regions to corresponding textual segments, including phrases and words. To accomplish this task, our architecture makes use of the rich semantic information available in a joint embedding space of multi-modal data. From this joint embedding space, we extract the associative localization maps that emerge naturally, without any explicit supervision for the localization task during training. The joint space is learned using a bidirectional ranking objective that is optimized with an \(N\)-pair loss formulation. This training mechanism demonstrates that localization information is learned inherently while optimizing a bidirectional retrieval objective. The model's retrieval and localization performance is evaluated on the MSCOCO and Flickr30K Entities datasets. The architecture outperforms state-of-the-art results in the semi-supervised phrase localization setting.
ISSN: 2331-8422
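
The summary above describes a joint embedding learned with a bidirectional ranking objective optimized via an \(N\)-pair loss over matched image-caption pairs. The code below is a minimal sketch of such a bidirectional \(N\)-pair objective, assuming a PyTorch implementation in which the other pairs in a batch serve as negatives; the function name, the scale factor, and the exact formulation are illustrative and may differ from the paper's.

import torch
import torch.nn.functional as F

def bidirectional_npair_loss(img_emb, txt_emb, scale=10.0):
    # img_emb, txt_emb: (N, d) embeddings of N matched image-caption pairs.
    # L2-normalize so that dot products are cosine similarities.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)

    # (N, N) similarity matrix; entry (i, j) compares image i with caption j.
    sim = scale * img_emb @ txt_emb.t()

    # Matching pairs lie on the diagonal.
    targets = torch.arange(sim.size(0), device=sim.device)

    # Rank each image against all captions in the batch, and vice versa,
    # so every positive pair competes against the other N-1 in-batch pairs.
    loss_i2t = F.cross_entropy(sim, targets)
    loss_t2i = F.cross_entropy(sim.t(), targets)
    return (loss_i2t + loss_t2i) / 2

Averaging the two directions reflects the bidirectional retrieval setup: the same similarity matrix scores caption retrieval from images and image retrieval from captions.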