Triplet-Based Deep Hashing Network for Cross-Modal Retrieval

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 27, No. 8, pp. 3893-3903
Main Authors: Deng, Cheng; Chen, Zhaojia; Liu, Xianglong; Gao, Xinbo; Tao, Dacheng
Format: Journal Article
Language: English
Published: United States, IEEE, 01.08.2018

Summary: Given the benefits of its low storage requirements and high retrieval efficiency, hashing has recently received increasing attention. In particular, cross-modal hashing has been widely and successfully used in multimedia similarity search applications. However, almost all existing cross-modal hashing methods fail to obtain powerful hash codes because they ignore the relative similarity between heterogeneous data, which carries richer semantic information, leading to unsatisfactory retrieval performance. In this paper, we propose a triplet-based deep hashing (TDH) network for cross-modal retrieval. First, we use triplet labels, which describe the relative relationships among three instances, as supervision in order to capture more general semantic correlations between cross-modal instances. We then establish a loss function from both the inter-modal view and the intra-modal view to boost the discriminative ability of the hash codes. Finally, graph regularization is introduced into the proposed TDH method to preserve the original semantic similarity between hash codes in Hamming space. Experimental results show that our proposed method outperforms several state-of-the-art approaches on two popular cross-modal data sets.
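
To make the loss structure described in the summary concrete, the sketch below is a minimal, hypothetical PyTorch illustration of a margin-based cross-modal triplet loss combined with a graph-regularization term. It is not the paper's implementation: the function names (triplet_hash_loss, graph_regularizer, total_loss), the hyperparameters margin and gamma, the use of Euclidean distance as a relaxed stand-in for Hamming distance, and the label-based affinity matrix are all assumptions for illustration only. Here img_codes and txt_codes stand for the relaxed (pre-binarization) outputs of two modality-specific networks.

```python
import torch
import torch.nn.functional as F


def triplet_hash_loss(anchor, positive, negative, margin=1.0):
    # Relative-similarity (triplet) constraint: the anchor code should be
    # closer to the positive code than to the negative code by at least `margin`.
    d_pos = F.pairwise_distance(anchor, positive)  # Euclidean distance as a relaxed proxy for Hamming distance
    d_neg = F.pairwise_distance(anchor, negative)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()


def graph_regularizer(codes, affinity):
    # Pull codes of semantically similar items (large affinity) together
    # in the relaxed, real-valued code space.
    dist = torch.cdist(codes, codes) ** 2  # (n, n) squared pairwise distances
    return (affinity * dist).sum() / codes.shape[0]


def total_loss(img_codes, txt_codes, triplets, affinity, margin=1.0, gamma=0.1):
    a, p, n = triplets  # index tensors selecting anchor / positive / negative items
    # Inter-modal view: anchors from one modality, positives/negatives from the other.
    inter = (triplet_hash_loss(img_codes[a], txt_codes[p], txt_codes[n], margin)
             + triplet_hash_loss(txt_codes[a], img_codes[p], img_codes[n], margin))
    # Intra-modal view: triplets formed within each modality.
    intra = (triplet_hash_loss(img_codes[a], img_codes[p], img_codes[n], margin)
             + triplet_hash_loss(txt_codes[a], txt_codes[p], txt_codes[n], margin))
    # Graph regularization on each modality's relaxed codes.
    reg = graph_regularizer(img_codes, affinity) + graph_regularizer(txt_codes, affinity)
    return inter + intra + gamma * reg
```

In a full pipeline the relaxed codes would be binarized (e.g., with a sign function) at retrieval time; the exact loss terms, quantization handling, and regularization weights used by TDH are specified in the paper itself.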
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2018.2821921