AddGBoost: A gradient boosting-style algorithm based on strong learners

Bibliographic Details
Published in: Machine Learning with Applications, Vol. 7, p. 100243
Main Authors: Sipper, Moshe; Moore, Jason H.
Format: Journal Article
Language: English
Published: Elsevier Ltd, 15.03.2022
ISSN: 2666-8270
DOI: 10.1016/j.mlwa.2021.100243

Summary: We present AddGBoost, a gradient boosting-style algorithm, wherein the decision tree is replaced by a succession of (possibly) stronger learners, which are optimized via a state-of-the-art hyperparameter optimizer. Through experiments over 90 regression datasets we show that AddGBoost emerges as the top performer for 33% (with 2 stages) up to 42% (with 5 stages) of the datasets, when compared with seven well-known machine-learning algorithms: KernelRidge, LassoLars, SGDRegressor, LinearSVR, DecisionTreeRegressor, HistGradientBoostingRegressor, and LGBMRegressor.
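The summary describes the algorithm only at a high level: a stage-wise additive model in which each boosting stage may be a stronger learner than a decision tree. Below is a minimal, hypothetical sketch of that scheme, assuming squared-loss residual fitting and a fixed, illustrative choice of per-stage scikit-learn learners; it omits the per-stage hyperparameter optimization the paper uses and should not be read as the authors' implementation.

```python
# Illustrative sketch of a gradient boosting-style additive model where each
# stage is a (possibly) stronger learner fit to the residuals of the running
# prediction. Stage learners here are assumptions for demonstration only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import LinearSVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One learner per boosting stage (hypothetical choices, not the paper's).
stages = [
    KernelRidge(alpha=1.0),
    LinearSVR(max_iter=10_000),
    DecisionTreeRegressor(max_depth=4),
]

residual = y_train.copy()
fitted = []
for model in stages:
    model.fit(X_train, residual)                   # fit stage to current residuals
    residual = residual - model.predict(X_train)   # update residuals for next stage
    fitted.append(model)

# The final prediction is the sum of all stage predictions.
y_pred = sum(m.predict(X_test) for m in fitted)
print(f"R^2 on held-out data: {r2_score(y_test, y_pred):.3f}")
```

In the paper, each stage's learner and its hyperparameters are chosen by a hyperparameter optimizer rather than fixed in advance as above; the fixed stage list is purely for illustrating the additive, residual-fitting structure.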