Fusing multi-cues description for partial-duplicate image retrieval


Bibliographic Details
Published in: Journal of Visual Communication and Image Representation, Vol. 25, No. 7, pp. 1726-1731
Main Authors: Yan, Chenggang Clarence; Li, Liang; Wang, Zhan; Yin, Jian; Shi, Hailong; Jiang, Shuqiang; Huang, Qingming
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.10.2014

Summary: In traditional image retrieval, images are commonly represented using a Bag-of-visual-Words (BoW) model built from local image features. However, the lack of spatial and structural information limits its performance in practice. In this paper, we introduce a multi-cue description that fuses structural, content, and spatial information for partial-duplicate image retrieval. First, we propose a rotation-invariant Local Self-Similarity Descriptor (LSSD), which captures the internal structural layout of the locally self-similar texture regions around interest points. Then, based on the spatial pyramid model, we use both LSSD and SIFT to construct an image representation with multiple cues. Finally, we formulate the Semi-Relative Entropy as the distance metric. Comparison experiments with state-of-the-art methods on four popular databases show the efficiency and effectiveness of our approach.
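The record does not reproduce the paper's exact Semi-Relative Entropy formulation. As a hedged illustration only, the general family it belongs to — relative-entropy (KL-divergence) distances between L1-normalized BoW histograms — can be sketched as follows; the function name, the symmetrization, and the epsilon smoothing are all assumptions, not the authors' definition:

```python
import numpy as np

def relative_entropy_distance(p, q, eps=1e-10):
    """Symmetrized KL-divergence between two BoW histograms.

    Illustrative sketch only: this is NOT the paper's Semi-Relative
    Entropy, just a generic member of the relative-entropy family of
    histogram distances. `eps` smoothing avoids log(0) on empty bins.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()  # renormalize to valid probability distributions
    q /= q.sum()
    # KL(p || q) + KL(q || p), which is symmetric and zero iff p == q
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

A smaller distance indicates more similar visual-word statistics; for partial-duplicate retrieval, candidate images would be ranked by this distance to the query histogram.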
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2014.06.005