Bagged ensembles with tunable parameters

Bibliographic Details
Published in: Computational Intelligence, Vol. 35, No. 1, pp. 184–203
Main Authors: Pham, Hieu; Olafsson, Sigurdur
Format: Journal Article
Language: English
Published: Hoboken: Blackwell Publishing Ltd, 01.02.2019
Summary: Ensemble learning is a popular classification method where many individual simple learners contribute to a final prediction. Constructing an ensemble of learners has been shown to often improve prediction accuracy over a single learner. Bagging and boosting are the most common ensemble methods, each with distinct advantages. While boosting methods are typically very tunable with numerous parameters, to date, this type of flexibility has been missing for general bagging ensembles. In this paper, we propose a new tunable weighted bagged ensemble methodology, resulting in a very flexible method for classification. We explore the impact tunable weighting has on the votes of each learner in an ensemble and compare the results with pure bagging and the best-known bagged ensemble method, namely, the random forest.
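The abstract does not spell out the paper's weighting scheme, but the core idea of a tunable weighted bagged ensemble can be sketched as follows. This is an illustrative sketch only, not the authors' method: the base learners here are decision stumps, and the weight of each learner's vote is assumed to be its bootstrap-training accuracy raised to a hypothetical tuning parameter `alpha`, so that `alpha = 0` recovers pure (unweighted) bagging and larger `alpha` lets more accurate learners dominate the vote.

```python
import random
from collections import Counter

random.seed(0)  # for a reproducible demo

def fit_stump(X, y):
    """Fit a one-feature threshold classifier by exhaustive search.
    Returns (train_acc, feature, threshold, sign)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for sign in (1, -1):
                pred = [1 if sign * (row[f] - t) > 0 else 0 for row in X]
                acc = sum(p == yi for p, yi in zip(pred, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t, sign)
    return best

def stump_predict(stump, row):
    _, f, t, sign = stump
    return 1 if sign * (row[f] - t) > 0 else 0

def weighted_bagging(X, y, n_learners=25, alpha=2.0):
    """Train stumps on bootstrap samples; weight each learner's vote by
    (training accuracy) ** alpha. alpha is the hypothetical tuning knob:
    alpha = 0 gives every learner weight 1, i.e. pure bagging."""
    n = len(X)
    ensemble = []
    for _ in range(n_learners):
        idx = [random.randrange(n) for _ in range(n)]  # bootstrap sample
        stump = fit_stump([X[i] for i in idx], [y[i] for i in idx])
        ensemble.append((stump[0] ** alpha, stump))  # (vote weight, learner)
    return ensemble

def predict(ensemble, row):
    """Weighted majority vote over all learners in the ensemble."""
    votes = Counter()
    for w, stump in ensemble:
        votes[stump_predict(stump, row)] += w
    return votes.most_common(1)[0][0]
```

On a small separable dataset such as `X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]` with `y = [0, 0, 0, 1, 1, 1]`, the weighted vote classifies held-out points on either side of the class boundary; sweeping `alpha` then shows how vote weighting shifts the ensemble away from pure bagging.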
ISSN: 0824-7935, 1467-8640
DOI: 10.1111/coin.12198