Multi-resolution boosting for classification and regression problems


Bibliographic Details
Published in: Knowledge and Information Systems, Vol. 29, No. 2, pp. 435-456
Main Authors: Reddy, Chandan K.; Park, Jin-Hyeong
Format: Journal Article
Language: English
Published: London: Springer-Verlag, 01.11.2011

Summary: Various forms of additive modeling techniques have been successfully used in many data mining and machine learning–related applications. In spite of their great success, boosting algorithms still suffer from a few open-ended problems that require closer investigation. The efficiency of any additive modeling technique relies significantly on the choice of the weak learners and the form of the loss function. In this paper, we propose a novel multi-resolution approach for choosing the weak learners during additive modeling. Our method applies insights from multi-resolution analysis and chooses the optimal learners at multiple resolutions during different iterations of the boosting algorithms, which are simple yet powerful additive modeling methods. We demonstrate the advantages of this framework on both classification and regression problems and show results on both synthetic and real-world datasets taken from the UCI machine learning repository. Though demonstrated specifically in the context of boosting algorithms, our framework can be easily accommodated in general additive modeling techniques. Similarities and distinctions between the proposed algorithm and popular methods such as radial basis function networks are also discussed.
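This record contains only the abstract, not the algorithm itself. As a rough, hedged illustration of the general idea it describes (picking a weak learner from candidates at several resolutions in each boosting round), the sketch below uses squared-error gradient boosting for regression, with decision-tree depth standing in for resolution. The function names, the depth-based notion of resolution, and all parameter values are assumptions made for illustration, not the authors' method.

```python
# Illustrative sketch only; NOT the algorithm from the paper. At each boosting
# round, candidate trees of several depths ("resolutions") are fit to the
# residuals and the one with the lowest squared loss is added to the ensemble.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def multiresolution_boost(X, y, n_rounds=50, depths=(1, 2, 4), lr=0.1):
    pred = np.full(len(y), y.mean())           # start from a constant model
    learners = []
    for _ in range(n_rounds):
        residual = y - pred                    # negative gradient of squared loss
        best_loss, best_tree = None, None
        for d in depths:                       # candidate resolutions
            tree = DecisionTreeRegressor(max_depth=d).fit(X, residual)
            loss = np.mean((residual - tree.predict(X)) ** 2)
            if best_loss is None or loss < best_loss:
                best_loss, best_tree = loss, tree
        learners.append(best_tree)
        pred += lr * best_tree.predict(X)      # stage-wise additive update
    return y.mean(), learners

def predict(intercept, learners, X, lr=0.1):
    return intercept + lr * sum(t.predict(X) for t in learners)

# Usage (synthetic data): fit on X, y and score new points with predict().
# X, y = np.random.rand(200, 3), np.random.rand(200)
# intercept, learners = multiresolution_boost(X, y)
# y_hat = predict(intercept, learners, X)
```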
ISSN: 0219-1377; 0219-3116
DOI: 10.1007/s10115-010-0358-0