Boosting as a kernel-based method
| Published in | Machine Learning, Vol. 108, No. 11, pp. 1951–1974 |
| --- | --- |
| Main Authors | Aleksandr Y. Aravkin; Giulio Bottegal; Gianluigi Pillonetto |
| Format | Journal Article |
| Language | English |
| Published | New York: Springer US (Springer Nature B.V.), 01.11.2019 |
Summary: Boosting combines weak (biased) learners to obtain effective learning algorithms for classification and prediction. In this paper, we show a connection between boosting and kernel-based methods, highlighting both theoretical and practical applications. In the ℓ₂ context, we show that boosting with a weak learner defined by a kernel K is equivalent to estimation with a special *boosting kernel*. The number of boosting iterations can then be modeled as a continuous hyperparameter and fit (along with other parameters) using standard techniques. We then generalize the boosting kernel to a broad new class of boosting approaches for general weak learners, including those based on the ℓ₁, hinge, and Vapnik losses. We develop fast hyperparameter tuning for this class, which has a wide range of applications, including robust regression and classification. We illustrate several applications using synthetic and real data.
ISSN: 0885-6125; 1573-0565
DOI: 10.1007/s10994-019-05797-z
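
The ℓ₂ equivalence summarized in the abstract has a compact concrete form: for a weak learner given by the kernel ridge smoother S = K(K + λI)⁻¹, ν rounds of ℓ₂-boosting produce the smoother I − (I − S)^ν, and diagonalizing S lets ν take any nonnegative real value, which is what makes the iteration count a continuous hyperparameter. The sketch below is our illustration of that equivalence, not code from the paper; the Gaussian kernel choice, the helper names (`gaussian_kernel`, `boosted_smoother`), and all parameter values are assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, width=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width**2))

def boosted_smoother(K, lam, nu):
    # Weak learner: the kernel ridge smoother S = K (K + lam*I)^{-1}.
    # nu rounds of L2-boosting yield the smoother I - (I - S)^nu.
    # Working in the eigenbasis of S lets nu take any real value >= 0,
    # turning the iteration count into a continuous hyperparameter.
    n = K.shape[0]
    S = K @ np.linalg.inv(K + lam * np.eye(n))
    w, U = np.linalg.eigh((S + S.T) / 2.0)  # symmetrize for stability
    w_boost = 1.0 - (1.0 - np.clip(w, 0.0, 1.0)) ** nu
    return (U * w_boost) @ U.T

# Toy check: boosting a heavily regularized (weak) smoother on noisy
# sine data; larger nu recovers fit that the weak learner shrinks away.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(60, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(60)
K = gaussian_kernel(X, X)
for nu in (1.0, 3.5):
    yhat = boosted_smoother(K, lam=5.0, nu=nu) @ y
    print(f"nu={nu}: train MSE = {np.mean((yhat - y) ** 2):.4f}")
```

Larger ν undoes more of the weak learner's shrinkage, so ν plays a regularization role comparable to λ and can be tuned jointly with the other hyperparameters, as the abstract describes.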