Classification using Hierarchical Naïve Bayes models

Bibliographic Details
Published in: Machine Learning, Vol. 63, No. 2, pp. 135–159
Main Authors: Langseth, Helge; Nielsen, Thomas D.
Format: Journal Article
Language: English
Published: Dordrecht: Springer, 01.05.2006 (Springer Nature B.V.)

Summary: Classification problems have a long history in the machine learning literature. One of the simplest, yet most consistently well-performing, families of classifiers is the Naïve Bayes model. However, an inherent problem with these classifiers is the assumption that all attributes used to describe an instance are conditionally independent given the class of that instance. When this assumption is violated (which is often the case in practice), classification accuracy can suffer due to "information double-counting" and the omission of attribute interactions. In this paper we focus on a relatively new set of models, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models in the context of classification. Experimental results show that the learned models can significantly improve classification accuracy compared to other frameworks.
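To make the conditional-independence assumption (and the "information double-counting" it can cause) concrete, here is a minimal categorical Naïve Bayes sketch. This is illustrative only and is not the paper's Hierarchical Naïve Bayes learning algorithm; all function names and the toy data are invented for this example. The demo at the end shows that duplicating an attribute (an extreme case of correlated attributes) makes the posterior over-confident, which is precisely the effect the paper's latent variables are introduced to mitigate.

```python
import math
from collections import Counter, defaultdict


def train_nb(examples):
    """Count-based training. examples: list of (attribute_tuple, class_label).

    Naive Bayes assumes P(C | a1..an) is proportional to
    P(C) * prod_i P(ai | C), i.e. attributes are conditionally
    independent given the class.
    """
    class_counts = Counter(label for _, label in examples)
    cond_counts = Counter()          # (attr_index, value, label) -> count
    attr_values = defaultdict(set)   # attr_index -> set of observed values
    for attrs, label in examples:
        for i, v in enumerate(attrs):
            cond_counts[(i, v, label)] += 1
            attr_values[i].add(v)
    return class_counts, cond_counts, attr_values


def log_posterior(model, attrs, label, alpha=1.0):
    """Unnormalized log P(label | attrs) with Laplace smoothing alpha."""
    class_counts, cond_counts, attr_values = model
    lp = math.log(class_counts[label] / sum(class_counts.values()))
    for i, v in enumerate(attrs):
        num = cond_counts[(i, v, label)] + alpha
        den = class_counts[label] + alpha * len(attr_values[i])
        lp += math.log(num / den)    # each attribute contributes one factor
    return lp


def posterior(model, attrs, label):
    """Normalized P(label | attrs) under the independence assumption."""
    lps = {c: log_posterior(model, attrs, c) for c in model[0]}
    z = sum(math.exp(v) for v in lps.values())
    return math.exp(lps[label]) / z


def predict(model, attrs):
    return max(model[0], key=lambda c: log_posterior(model, attrs, c))


# Double-counting demo: train once on a single attribute, then on the
# same data with that attribute duplicated. The duplicate carries no new
# information, but its likelihood term enters the product a second time,
# so the posterior becomes more extreme (over-confident).
ex1 = [(("a",), "pos")] * 3 + [(("b",), "pos")] + \
      [(("b",), "neg")] * 3 + [(("a",), "neg")]
ex2 = [((x, x), y) for (x,), y in ex1]   # perfectly correlated copy
m1, m2 = train_nb(ex1), train_nb(ex2)
p1 = posterior(m1, ("a",), "pos")        # belief from one observation
p2 = posterior(m2, ("a", "a"), "pos")    # same evidence counted twice: p2 > p1
```

The Hierarchical Naïve Bayes models studied in the paper instead group such correlated attributes as children of a latent variable, so the correlated evidence is aggregated once rather than multiplied in independently.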
ISSN: 0885-6125
EISSN: 1573-0565
DOI: 10.1007/s10994-006-6136-2