Learning mixture models with the regularized latent maximum entropy principle

Bibliographic Details
Published in IEEE Transactions on Neural Networks, Vol. 15, No. 4, pp. 903-916
Main Authors Shaojun Wang, D. Schuurmans, Fuchun Peng, Yunxin Zhao
Format Journal Article
Language English
Published United States, IEEE, 01.07.2004
Summary: This paper presents a new approach to estimating mixture models based on a recent inference principle we have proposed: the latent maximum entropy principle (LME). LME is different from Jaynes' maximum entropy principle, standard maximum likelihood, and maximum a posteriori probability estimation. We demonstrate the LME principle by deriving new algorithms for mixture model estimation, and show how robust new variants of the expectation-maximization (EM) algorithm can be developed. We show that a regularized version of LME (RLME) is effective at estimating mixture models. It generally yields better results than plain LME, which in turn is often better than maximum likelihood and maximum a posteriori estimation, particularly when inferring latent variable models from small amounts of data.
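For orientation only, the sketch below shows the standard maximum-likelihood EM baseline for a one-dimensional Gaussian mixture, i.e. the kind of estimator that the LME and RLME approaches in the abstract are contrasted with. It is a minimal illustrative assumption (numpy-based, with a made-up function name em_gaussian_mixture and arbitrary defaults), not the paper's RLME algorithm.

    # Minimal sketch (assumption, not the paper's method): standard EM for a
    # 1-D Gaussian mixture, the maximum-likelihood baseline LME/RLME build on.
    import numpy as np

    def em_gaussian_mixture(x, k=2, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        n = len(x)
        # Initialize mixing weights, means, and variances.
        pi = np.full(k, 1.0 / k)
        mu = rng.choice(x, size=k, replace=False)
        var = np.full(k, np.var(x))
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component for each point.
            dens = (pi / np.sqrt(2 * np.pi * var)
                    * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: re-estimate parameters from weighted sufficient statistics.
            nk = resp.sum(axis=0)
            pi = nk / n
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
            var = np.maximum(var, 1e-6)  # guard against collapsed components
        return pi, mu, var

    # Example: recover two well-separated components from synthetic data.
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
    print(em_gaussian_mixture(data, k=2))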
ISSN: 1045-9227, 1941-0093
DOI: 10.1109/TNN.2004.828755