AddGBoost: A gradient boosting-style algorithm based on strong learners
We present AddGBoost, a gradient boosting-style algorithm wherein the decision tree is replaced by a succession of (possibly) stronger learners, which are optimized via a state-of-the-art hyperparameter optimizer. Through experiments over 90 regression datasets we show that AddGBoost emerges as the top performer for 33% (with 2 stages) up to 42% (with 5 stages) of the datasets, when compared with seven well-known machine-learning algorithms: KernelRidge, LassoLars, SGDRegressor, LinearSVR, DecisionTreeRegressor, HistGradientBoostingRegressor, and LGBMRegressor.
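
The abstract describes a simple additive scheme: each stage fits a (possibly strong) learner to the residuals left by the stages before it, with the learner and its hyperparameters chosen by an optimizer. Below is a minimal sketch of that idea, assuming scikit-learn-style regressors; the candidate learners, search spaces, and the grid search standing in for the paper's hyperparameter optimizer are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of staged boosting with (possibly) strong learners.
# Not the authors' code: the candidate learners, search grids, and the use
# of GridSearchCV in place of the paper's hyperparameter optimizer are
# assumptions made for illustration.
import numpy as np
from sklearn.base import clone
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor


class AddGBoostSketch:
    def __init__(self, n_stages=3):
        self.n_stages = n_stages
        # Candidate learners (with small illustrative grids) tried at every stage.
        self.candidates = [
            (KernelRidge(), {"alpha": [0.1, 1.0, 10.0]}),
            (DecisionTreeRegressor(), {"max_depth": [3, 5, None]}),
        ]

    def fit(self, X, y):
        self.stages_ = []
        residual = np.asarray(y, dtype=float)
        for _ in range(self.n_stages):
            # Pick the learner/hyperparameters that best fit the current residual.
            best_model, best_score = None, -np.inf
            for est, grid in self.candidates:
                search = GridSearchCV(clone(est), grid, cv=3)
                search.fit(X, residual)
                if search.best_score_ > best_score:
                    best_model = search.best_estimator_
                    best_score = search.best_score_
            self.stages_.append(best_model)
            # Boosting step: the next stage models what this one missed.
            residual = residual - best_model.predict(X)
        return self

    def predict(self, X):
        # Additive model: the prediction is the sum of all stage predictions.
        return sum(stage.predict(X) for stage in self.stages_)
```

Usage follows the scikit-learn convention, e.g. `AddGBoostSketch(n_stages=5).fit(X_train, y_train).predict(X_test)`; per the abstract, performance improves with the number of stages (top performer on 33% of datasets at 2 stages, 42% at 5).
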
| Published in | Machine Learning with Applications, Vol. 7, p. 100243 |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | Elsevier Ltd, 15.03.2022 |
| ISSN | 2666-8270 |
| DOI | 10.1016/j.mlwa.2021.100243 |