Multi-Scale Multi-Level Generative Model in Scene Classification

Bibliographic Details
Published in: IEICE Transactions on Information and Systems, Vol. E94.D, No. 1, pp. 167-170
Main Authors: XIE, Wenjie; XU, De; TANG, Yingjun; CUI, Geng
Format: Journal Article
Language: English, Japanese
Published: Oxford: The Institute of Electronics, Information and Communication Engineers; Oxford University Press, 01.01.2011

Summary: Previous works show that the probabilistic Latent Semantic Analysis (pLSA) model is one of the best generative models for scene categorization and can obtain acceptable classification accuracy. However, this method uses a fixed number of topics to construct the final image representation. In such a way, it restricts the image description to one level of visual detail and cannot reach a higher accuracy rate. To solve this problem, we propose a novel generative model, referred to as the multi-scale multi-level probabilistic Latent Semantic Analysis model (msml-pLSA). This method consists of two parts: a multi-scale part, which extracts visual details from the image at diverse resolutions, and a multi-level part, which concatenates multiple levels of topic representation to model the scene. The msml-pLSA model allows for the description of fine and coarse local image detail in one framework. The proposed method is evaluated on the well-known scene classification dataset with 15 scene categories, and experimental results show that the proposed msml-pLSA model improves classification accuracy compared with typical classification methods.
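The abstract describes fitting pLSA topic models and concatenating topic representations from several levels into one image descriptor. The sketch below is not the authors' implementation; it is a minimal illustration of the underlying idea, assuming a standard EM fit of pLSA on a document-word (image/visual-word) count matrix, with hypothetical function names and illustrative topic counts.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit pLSA by EM on a (n_docs, n_words) count matrix.

    Returns P(z|d) of shape (n_docs, n_topics) and
    P(w|z) of shape (n_topics, n_words).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random initialization of the two conditional distributions.
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w), shape (n_docs, n_words, n_topics).
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        resp = joint / (joint.sum(axis=2, keepdims=True) + 1e-12)
        # M-step: re-estimate P(z|d) and P(w|z) from count-weighted responsibilities.
        weighted = counts[:, :, None] * resp
        p_z_d = weighted.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
        p_w_z = weighted.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

def multi_level_feature(counts, topic_levels=(10, 20, 40)):
    """Concatenate P(z|d) obtained with several topic counts ("levels")
    into one image representation, as the multi-level part suggests."""
    return np.concatenate([plsa(counts, k)[0] for k in topic_levels], axis=1)
```

In this sketch the multi-scale part would simply correspond to building the count matrix from visual words extracted at several image resolutions before calling `multi_level_feature`; the resulting concatenated vector is then fed to an ordinary classifier.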
ISSN: 0916-8532, 1745-1361
DOI: 10.1587/transinf.E94.D.167