Content-based image clustering via multi-view visual vocabularies

Bibliographic Details
Published in: Proceedings of the 31st Chinese Control Conference, pp. 3974-3977
Main Authors: Xu Wangming, Liu Xinhai, Fang Kangling
Format: Conference Proceeding
Language: English
Published: IEEE, 01.07.2012

Summary: Content-based image clustering is a challenging but useful topic for the efficient management of image databases and for effective image retrieval, especially with the emergence of huge numbers of images on the web and in everyday life. Owing to the success of the bag-of-words (BOW) model in text mining, visual vocabularies composed of bags of visual words (BOVW) have in recent years been applied to content-based image processing and analysis, including image clustering. In practice, a single visual vocabulary usually leads to an irregular partition of the image database, both because of the instability of random initialization in general clustering algorithms such as K-Means and because of the lack of semantic meaning in visual words. To circumvent these drawbacks, a new image clustering strategy based on multiple visual vocabularies is proposed in this paper, which provides multi-view information about the given image database. The strategy builds on a tensor method, multilinear singular value decomposition (MLSVD), which leverages the contribution of each view to facilitate the clustering procedure. Experiments on a subset of the Caltech 101 image database show that this strategy obtains robust and even better clustering results by integrating multi-view information.
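
The sketch below illustrates, in Python, the general idea described in the summary: several visual vocabularies are built by running K-Means with different random seeds, each image is encoded as a BOVW histogram under each vocabulary (one "view" per vocabulary), the views are stacked into a third-order tensor, and a multilinear SVD along the image mode yields a fused representation that is then clustered. It is a minimal sketch under assumed parameters (vocabulary size, number of views, number of clusters) and synthetic descriptors; it is not the authors' exact implementation, and the mode-1 HOSVD fusion step stands in for the paper's full MLSVD procedure.

# Minimal sketch of multi-view BOVW clustering (assumptions noted in comments).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: 60 "images", each with 100 local descriptors of dimension 16
# (in practice these would be SIFT-like descriptors extracted from real images).
descriptors = [rng.normal(size=(100, 16)) for _ in range(60)]

n_words = 50      # vocabulary size (assumed)
n_views = 3       # number of visual vocabularies, i.e. views (assumed)
n_clusters = 4    # target number of image clusters (assumed)

# Build several visual vocabularies with different K-Means initializations,
# then encode every image as a normalized BOVW histogram under each vocabulary.
all_desc = np.vstack(descriptors)
views = []
for seed in range(n_views):
    vocab = KMeans(n_clusters=n_words, n_init=3, random_state=seed).fit(all_desc)
    hists = np.zeros((len(descriptors), n_words))
    for i, d in enumerate(descriptors):
        words = vocab.predict(d)
        hists[i] = np.bincount(words, minlength=n_words) / len(words)
    views.append(hists)

# Stack the views into a 3rd-order tensor: images x visual words x views.
T = np.stack(views, axis=2)

# HOSVD-style fusion along the image mode: unfold the tensor so each row
# corresponds to one image, take the leading left singular vectors, and use
# them as a low-dimensional representation that integrates all views.
unfolded = T.reshape(T.shape[0], -1)                 # mode-1 unfolding
U, s, _ = np.linalg.svd(unfolded, full_matrices=False)
fused = U[:, :n_clusters]                            # leading image-mode factors

# Final clustering on the fused multi-view representation.
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(fused)
print(labels)

The point of the tensor step is that no single vocabulary's quantization errors dominate: the image-mode singular vectors summarize what the views agree on, which is what gives the fused representation its robustness to the random initialization of any one vocabulary.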
ISBN:1467325813
9781467325813
ISSN:1934-1768
2161-2927