Accelerate tree ensemble learning based on adaptive sampling

Bibliographic Details
Published in: Journal of Computational Methods in Sciences and Engineering, Vol. 20, No. 2, pp. 509–519
Main Authors: Zhou, Yu; Li, Hui; Chen, Mei; Dai, Zhenyu; Zhu, Ming
Format: Journal Article
Language: English
Published: London, England: SAGE Publications, 01.01.2020

Summary: Gradient Boosting Decision Tree (GBDT) has been used extensively in machine learning applications due to its efficiency, accuracy, and interpretability. Although there are excellent and popular open-source implementations such as XGBoost and LightGBM, large data sizes tend to make scalable and efficient learning very difficult. Since sampling is an effective technique for alleviating the performance issues of massive data analysis, we exploit sampling to address this problem. In this paper, we propose AdaGBDT, an approach that applies an adaptive sampling method based on Massart's inequality to build the GBDT model, drawing samples in an online manner without manually specifying the sample size. AdaGBDT is implemented by integrating the adaptive sampling method into LightGBM. The experimental results show that AdaGBDT not only keeps a small sample size and achieves better training performance than LightGBM, but also satisfies the constraints on estimation accuracy and confidence.
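The abstract does not spell out the stopping rule, but a sample-size bound of this shape follows directly from Massart's tight-constant form of the DKW inequality, P(sup_x |F_n(x) − F(x)| > ε) ≤ 2·exp(−2nε²): the bound drops below δ once n ≥ ln(2/δ)/(2ε²). Below is a minimal Python sketch of that idea only; the function names (massart_sample_size, adaptive_sample) and the ε/δ defaults are illustrative assumptions, not the paper's actual implementation inside LightGBM.

```python
import math

def massart_sample_size(epsilon, delta):
    """Smallest n for which Massart's bound 2*exp(-2*n*epsilon**2) <= delta,
    i.e. the empirical CDF is within epsilon of the truth with
    probability at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def adaptive_sample(stream, epsilon=0.05, delta=0.01):
    """Draw records from `stream` one at a time (online) and stop as soon
    as the Massart bound is met, so no sample size is set by hand."""
    needed = massart_sample_size(epsilon, delta)
    sample = []
    for record in stream:
        sample.append(record)
        if len(sample) >= needed:
            break  # accuracy/confidence constraint satisfied
    return sample

# Illustrative use: train on the adaptive sample instead of the full data.
# `training_rows` is a placeholder for any iterable of training records.
# subset = adaptive_sample(iter(training_rows), epsilon=0.05, delta=0.01)
```

Under such a rule the sample size is driven entirely by the target accuracy ε and confidence 1 − δ, which matches the abstract's claim that the sample size need not be specified manually.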
ISSN: 1472-7978, 1875-8983
DOI: 10.3233/JCM-193912