
MSBoost: Using Model Selection with Multiple Base Estimators for Gradient Boosting

By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – According to news reporting based on a preprint abstract, our journalists obtained the following quote sourced from osf.io: “Gradient boosting is a widely used machine learning algorithm for tabular regression, classification, and ranking. However, most open-source implementations of gradient boosting, such as XGBoost and LightGBM, use decision trees as the sole base estimator.

“This paper, for the first time, takes an alternative path: rather than relying on a single static base estimator (usually a decision tree), it trains a list of models in parallel on the residual errors of the previous layer and then selects the model with the least validation error as the base estimator for that particular layer.”
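The per-layer selection idea described in the quote can be sketched as follows. This is a minimal illustration assuming squared-error loss, a held-out validation split, and an arbitrary illustrative candidate pool (decision tree, ridge regression, k-NN); the function names and candidate list are hypothetical, not the paper's actual implementation.

```python
# Sketch of MSBoost-style boosting: at each layer, fit several candidate
# base estimators on the current residuals and keep whichever one yields
# the lowest validation error. Candidate pool and names are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error


def msboost_fit(X, y, n_layers=10, learning_rate=0.1, random_state=0):
    """Greedy per-layer model selection for gradient boosting (regression)."""
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.25, random_state=random_state
    )
    layers = []
    pred_tr = np.zeros(len(y_tr))
    pred_val = np.zeros(len(y_val))
    for _ in range(n_layers):
        # Residuals of the previous layer (negative gradient of squared loss).
        residual = y_tr - pred_tr
        candidates = [
            DecisionTreeRegressor(max_depth=3, random_state=random_state),
            Ridge(alpha=1.0),
            KNeighborsRegressor(n_neighbors=5),
        ]
        # Fit every candidate on the residuals; select the one whose
        # update gives the lowest validation MSE.
        best, best_err = None, np.inf
        for model in candidates:
            model.fit(X_tr, residual)
            err = mean_squared_error(
                y_val, pred_val + learning_rate * model.predict(X_val)
            )
            if err < best_err:
                best, best_err = model, err
        layers.append(best)
        pred_tr += learning_rate * best.predict(X_tr)
        pred_val += learning_rate * best.predict(X_val)
    return layers


def msboost_predict(layers, X, learning_rate=0.1):
    """Sum the shrunken predictions of every selected layer."""
    out = np.zeros(X.shape[0])
    for model in layers:
        out += learning_rate * model.predict(X)
    return out
```

In this sketch the candidates are fit serially for clarity; the abstract's "in parallel" could be realized with, e.g., `joblib.Parallel`, since the candidate fits within a layer are independent.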

Cyborgs | Emerging Technologies | Machine Learning

2024

Robotics & Machine Learning Daily News


ISSN:
Year, Volume (Issue): 2024 (Jul. 1)