Strong-correlation unsupervised cross-modal retrieval method guided by information amount


Bibliographic Details
Main Authors: LI FANG, LUO XIAONAN, LAN RUSHI, DAI LIULIAN, YANG RUI
Format: Patent
Language: Chinese; English
Published: 15.09.2023

Summary: The invention relates to the technical field of cross-modal retrieval, in particular to a strong-correlation unsupervised cross-modal retrieval method guided by information amount, realized by the following steps: first, extracting the local features and global features of an image, together with text features; enhancing the local and global image features; applying regularization to the enhanced local features; performing orthogonal fusion of the global and local image features with an image feature fusion network; fusing the image features and text features with a multi-modal fusion network according to the principle of converting feature information quantity in proportion across modalities; and finally, mapping the features of the different modalities into hash codes and ranking by similarity using the Hamming distance to obtain the retrieval result. The method focuses on the enhancement and fusion of data features, so that more semantic information can be obtained.
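The final retrieval step described above, mapping features to hash codes and ranking by Hamming distance, can be sketched as follows. This is a minimal illustration only, not the patented method: the sign-thresholded random projection, the feature dimensions, and the toy data are all hypothetical stand-ins for the learned hashing networks the summary describes.

```python
import numpy as np

def to_hash_codes(features, projection):
    # Binarize features by the sign of a linear projection (illustrative
    # stand-in for a learned hashing network).
    return (features @ projection > 0).astype(np.uint8)

def hamming_distance(a, b):
    # Number of differing bits between code(s) a and code b.
    return np.count_nonzero(a != b, axis=-1)

rng = np.random.default_rng(0)
proj = rng.normal(size=(128, 32))        # hypothetical: 128-d features -> 32-bit codes
image_feats = rng.normal(size=(5, 128))  # toy "fused image" features for 5 items
text_feat = rng.normal(size=(128,))      # toy text query feature

image_codes = to_hash_codes(image_feats, proj)
query_code = to_hash_codes(text_feat, proj)

# Rank database items by Hamming distance to the query:
# smaller distance = more similar.
dists = hamming_distance(image_codes, query_code)
ranking = np.argsort(dists)
print(ranking)
```

Because both modalities are projected into the same binary code space, cross-modal similarity reduces to cheap bitwise comparison, which is the practical appeal of hash-based retrieval.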
Bibliography:Application Number: CN202310657100