Unsupervised Cross-Media Retrieval Using Domain Adaptation With Scene Graph


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 30, No. 11, pp. 4368-4379
Main Authors: Peng, Yuxin; Chi, Jingze
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2020

Summary: Existing cross-media retrieval methods are usually conducted under the supervised setting, which requires large amounts of annotated training data. Since annotating cross-media data is extremely labor-intensive, unsupervised cross-media retrieval is in high demand; it is also very challenging, because heterogeneous distributions across different media types must be handled without any annotated information. To address this challenge, this paper proposes the Domain Adaptation with Scene Graph (DASG) approach, which transfers knowledge from the source domain to improve cross-media retrieval in the target domain. The DASG approach takes Visual Genome as the source domain, which contains image knowledge in the form of scene graphs. The main contributions of this paper are as follows. First, we propose to address unsupervised cross-media retrieval by domain adaptation. Instead of using labor-intensive annotated information of cross-media data in the training stage, our DASG approach learns cross-media correlation knowledge from Visual Genome, and then transfers this knowledge to cross-media retrieval through media alignment and distribution alignment. Second, our DASG approach utilizes fine-grained information via scene graph representation to enhance generalization capability across domains. The generated scene graph representation builds (subject → relationship → object) triplets by exploiting objects and relationships within image and text, which makes the cross-media correlation more precise and promotes unsupervised cross-media retrieval. Third, we exploit related tasks, including object and relationship detection, to learn more discriminative features across domains. Leveraging the semantic information of objects and relationships improves cross-media correlation learning for retrieval. Experiments on two widely used cross-media retrieval datasets, Flickr-30K and MS-COCO, show the effectiveness of our DASG approach.
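The (subject → relationship → object) triplets described in the summary can be sketched in a few lines. This is a minimal illustration only, not the authors' DASG implementation: the scene-graph format, the `extract_triplets` helper, and the Jaccard-overlap scoring are simplifying assumptions made for demonstration (the paper learns correlations via domain adaptation, not set overlap).

```python
# Illustrative sketch (not the paper's method): representing an image and a
# text as sets of (subject, relationship, object) triplets, then scoring
# their cross-media similarity by triplet overlap.

def extract_triplets(scene_graph):
    """Flatten a list of scene-graph edges into a set of triplets."""
    return {(e["subject"], e["relationship"], e["object"]) for e in scene_graph}

def triplet_similarity(image_graph, text_graph):
    """Jaccard overlap between triplet sets as a toy cross-media score."""
    a, b = extract_triplets(image_graph), extract_triplets(text_graph)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical scene graph detected from an image.
image_graph = [
    {"subject": "man", "relationship": "riding", "object": "horse"},
    {"subject": "horse", "relationship": "on", "object": "beach"},
]
# Hypothetical scene graph parsed from the caption "a man riding a horse".
text_graph = [
    {"subject": "man", "relationship": "riding", "object": "horse"},
]

print(triplet_similarity(image_graph, text_graph))  # 0.5
```

Because both media types are mapped into the same triplet space, the shared structure ("man riding horse") is matched directly, which is the intuition behind using scene graphs to make cross-media correlation more precise.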
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2019.2953692