Neural Networks, 2022, Vol. 147. DOI: 10.1016/j.neunet.2021.12.016

Fully corrective gradient boosting with squared hinge: Fast learning rates and early stopping

Zeng J.¹, Zhang M.¹, Lin S.-B.²

Author Information

  • 1. School of Computer and Information Engineering, Jiangxi Normal University
  • 2. Center for Intelligent Decision-Making and Machine Learning, School of Management, Xi'an Jiaotong University

Abstract

© 2021 Elsevier Ltd.

In this paper, we propose an efficient boosting method with theoretical guarantees for binary classification. The proposed method has three key ingredients: a fully corrective greedy (FCG) update, a differentiable squared hinge (also called truncated quadratic) loss function, and an efficient alternating direction method of multipliers (ADMM) solver. Compared with traditional boosting methods, the FCG update accelerates the numerical convergence rate, while the squared hinge loss inherits the robustness of the hinge loss for classification and retains the theoretical benefits of the square loss in regression. The ADMM solver, with guaranteed fast convergence, then provides an efficient implementation of the proposed boosting method. We conduct both theoretical analysis and numerical verification to demonstrate the advantages of the proposed method. Theoretically, a fast learning rate of order O((m/log m)^{-1/2}) is proved under certain standard assumptions, where m is the sample size. Numerically, a series of toy simulations and real-data experiments are carried out to verify the developed theory.
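
To make the abstract's three ingredients concrete, here is a minimal Python sketch of a fully corrective greedy boosting round with the squared hinge loss. It is an illustration under simplifying assumptions, not the authors' implementation: the weak learners are abstracted as columns of a precomputed matrix H, the paper's ADMM corrective solver is replaced by a generic L-BFGS refit from SciPy, and the names fcg_boost and squared_hinge are made up for this example.

```python
# Hypothetical sketch of fully corrective greedy (FCG) boosting with the
# squared hinge loss. Not the authors' code: the corrective step below uses
# L-BFGS instead of the ADMM solver described in the paper.
import numpy as np
from scipy.optimize import minimize

def squared_hinge(margins):
    """Squared hinge (truncated quadratic) loss: phi(t) = max(0, 1 - t)^2."""
    return np.maximum(0.0, 1.0 - margins) ** 2

def fcg_boost(H, y, n_rounds=50):
    """Greedy atom selection plus fully corrective refit of all selected weights.

    H : (m, p) matrix whose columns are outputs of candidate weak learners.
    y : (m,) labels in {-1, +1}.
    """
    m, p = H.shape
    selected, weights = [], np.zeros(0)
    f = np.zeros(m)                                   # current ensemble output
    for _ in range(n_rounds):
        # Per-sample gradient of the squared hinge loss w.r.t. the current fit f.
        g = -2.0 * y * np.maximum(0.0, 1.0 - y * f)
        # Greedy step: pick the atom whose correlation with the gradient is
        # largest in magnitude, excluding atoms already selected.
        scores = np.abs(H.T @ g)
        if selected:
            scores[selected] = -np.inf
        selected.append(int(np.argmax(scores)))
        # Fully corrective step: jointly re-optimize all selected coefficients.
        Hs = H[:, selected]
        objective = lambda w: squared_hinge(y * (Hs @ w)).mean()
        weights = minimize(objective, np.append(weights, 0.0),
                           method="L-BFGS-B").x
        f = Hs @ weights
    return selected, weights

# Example usage on toy data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.standard_normal((200, 40))
    w_true = np.zeros(40); w_true[:3] = [1.5, -2.0, 1.0]
    y = np.sign(H @ w_true + 0.1 * rng.standard_normal(200))
    atoms, w = fcg_boost(H, y, n_rounds=10)
    acc = np.mean(np.sign(H[:, atoms] @ w) == y)
    print(f"selected atoms: {atoms}\ntraining accuracy: {acc:.3f}")
```

The defining feature of the fully corrective update is the joint refit of all previously selected coefficients at every round, in contrast to stage-wise boosting, which freezes earlier weights; this refit is what the abstract credits for the faster numerical convergence.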

Key words

Boosting; Early stopping; Fully corrective greedy; Learning theory; Squared hinge


Publication Year: 2022
Journal: Neural Networks
Indexed in: EI, SCI
ISSN: 0893-6080
Cited by: 5
References: 56