Robust common visual pattern discovery using graph matching
Published in: Journal of Visual Communication and Image Representation, Vol. 24, No. 5, pp. 635-646
Format: Journal Article
Language: English
Published: Amsterdam: Elsevier Inc., 01.07.2013
Summary:
• A new common visual pattern (CVP) discovery framework is proposed.
• It consists of three successive steps: initialization, expansion, and combination.
• It effectively handles photometric and geometric image transformations.
• The discovered CVPs can be used for object recognition and near-duplicate image retrieval.

Discovering common visual patterns (CVPs) between two images is difficult and time-consuming, owing to photometric and geometric transformations. State-of-the-art methods for CVP discovery are either computationally expensive or rely on complicated constraints. In this paper, we formulate CVP discovery as a graph matching problem based on pairwise geometric compatibility between feature correspondences. To find all CVPs efficiently, we propose a novel framework consisting of three components: Preliminary Initialization Optimization (PIO), Guided Expansion (GE), and Post Agglomerative Combination (PAC). PIO produces the initial CVPs and reduces the search space of CVP discovery by exploiting the internal homogeneity of CVPs. GE then anchors on these initializations and gradually expands them to recover additional correct correspondences. Finally, to reduce false and missed detections, PAC refines the discovery result in an agglomerative way. Experiments and applications on benchmark datasets demonstrate the effectiveness and efficiency of our method.
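The abstract does not spell out the compatibility measure or the expansion rule, so the following is a minimal sketch of the general idea only: candidate feature correspondences become graph nodes, pairwise geometric compatibility gives edge weights, and a CVP is a mutually compatible cluster of correspondences. The distance-ratio measure, the `sigma` and `tau` parameters, and the greedy expansion loop below are illustrative assumptions, not the paper's PIO/GE/PAC procedure.

```python
# Sketch of pairwise geometric compatibility between correspondences,
# the quantity a graph matching formulation of CVP discovery is built on.
# The specific measure and thresholds are assumptions for illustration.
import numpy as np

def compatibility(c1, c2, sigma=0.1):
    """Geometric compatibility of two correspondences.

    Each correspondence is ((x1, y1), (x2, y2)): a point in image A
    matched to a point in image B. Under a similarity transform the
    ratio of inter-point distances is preserved, so we score how well
    the two correspondences agree on that ratio (an assumed measure).
    """
    (a1, b1), (a2, b2) = c1, c2
    dA = np.linalg.norm(np.subtract(a1, a2))  # distance in image A
    dB = np.linalg.norm(np.subtract(b1, b2))  # distance in image B
    if dA < 1e-9 or dB < 1e-9:
        return 0.0
    r = min(dA, dB) / max(dA, dB)             # 1.0 = perfectly consistent
    return float(np.exp(-(1.0 - r) ** 2 / sigma))

def greedy_expand(corrs, seed, tau=0.7):
    """Grow one CVP from a seed correspondence: repeatedly add the
    candidate most compatible, on average, with the current cluster.
    A stand-in for an expansion step, not the paper's GE algorithm."""
    cluster = [seed]
    rest = [i for i in range(len(corrs)) if i != seed]
    while rest:
        scores = [np.mean([compatibility(corrs[i], corrs[j]) for j in cluster])
                  for i in rest]
        best = int(np.argmax(scores))
        if scores[best] < tau:                # no remaining candidate fits
            break
        cluster.append(rest.pop(best))
    return cluster

# Toy demo: three translation-consistent matches plus one outlier.
matches = [((0, 0), (10, 10)), ((1, 0), (11, 10)),
           ((0, 1), (10, 11)), ((5, 5), (40, -3))]
print(greedy_expand(matches, seed=0))         # -> [0, 1, 2]; outlier rejected
```

Averaging compatibility against the whole cluster, rather than only the seed, keeps the grown pattern internally homogeneous, which mirrors the internal-homogeneity intuition the abstract attributes to PIO.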
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2013.04.012