Deep Metric Multi-View Hashing for Multimedia Retrieval

Bibliographic Details
Published in: 2023 IEEE International Conference on Multimedia and Expo (ICME), pp. 1955-1960
Main Authors: Zhu, Jian; Ruan, Xiaohu; Cheng, Yongli; Huang, Zhangmin; Cui, Yu; Zeng, Lingfang
Format: Conference Proceeding
Language: English
Published: IEEE, 01.07.2023

Summary: Learning the hash representation of multi-view heterogeneous data is an important task in multimedia retrieval. However, existing methods fail to effectively fuse the multi-view features and to exploit the metric information provided by dissimilar samples, leading to limited retrieval precision. Current methods fuse multi-view features by weighted sum or concatenation; we argue that these fusion methods cannot capture the interactions among different views. Furthermore, they ignore the information provided by dissimilar samples. We propose a novel Deep Metric Multi-View Hashing (DMMVH) method to address these problems. Extensive empirical evidence shows that gate-based fusion outperforms the typical fusion methods. We also introduce deep metric learning to the multi-view hashing problem, which makes it possible to exploit the metric information of dissimilar samples. On the MIR-Flickr25K, MS COCO, and NUS-WIDE datasets, our method outperforms the current state-of-the-art methods by a large margin (up to a 15.28 mean Average Precision (mAP) improvement).
ISSN:1945-788X
DOI:10.1109/ICME55011.2023.00335
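The summary names two ingredients without giving the architecture: gate-based fusion of per-view features, and a metric loss that pushes dissimilar samples apart. The following NumPy sketch illustrates both ideas in their simplest form; all function names, shapes, and parameters (`Wg`, `bg`, the triplet margin) are illustrative assumptions, not the DMMVH paper's actual implementation.

```python
import numpy as np

def gated_fusion(views, Wg, bg):
    """Fuse per-view feature vectors with learned sigmoid gates.

    views: list of n feature vectors, each of shape (d,)
    Wg:    gate weights of shape (n, n*d); bg: gate bias of shape (n,)
    Unlike a fixed weighted sum or concatenation, the gate is computed
    from the joint representation, so views interact when weighting.
    """
    x = np.concatenate(views)                   # joint representation, shape (n*d,)
    g = 1.0 / (1.0 + np.exp(-(Wg @ x + bg)))    # one sigmoid gate per view
    return sum(g[i] * v for i, v in enumerate(views))

def triplet_metric_loss(anchor, positive, negative, margin=1.0):
    """Triplet-style metric loss: pull similar pairs together and push
    dissimilar samples at least `margin` apart -- the signal from
    dissimilar samples that plain fusion objectives ignore."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy usage with two 4-dimensional views (random, for shape-checking only).
rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=4), rng.normal(size=4)
Wg, bg = rng.normal(size=(2, 8)), np.zeros(2)
fused = gated_fusion([v1, v2], Wg, bg)
```

In a real system the fused representation would then be passed through a hash layer (e.g. `tanh` followed by sign binarization) to produce the binary codes, and `Wg`, `bg` would be trained jointly with the metric loss.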