Accelerated Componentwise Gradient Boosting Using Efficient Data Representation and Momentum-Based Optimization

MCML Authors

Bernd Bischl

Prof. Dr.

Director

David Rügamer

Prof. Dr.

Principal Investigator

Abstract

Componentwise boosting (CWB), also known as model-based boosting, is a variant of gradient boosting that builds on additive models as base learners to ensure interpretability. CWB is thus often used in research areas where models are employed as tools to explain relationships in data. One downside of CWB is its computational complexity in terms of memory and runtime. In this article, we propose two techniques to overcome these issues without losing the properties of CWB: feature discretization of numerical features and incorporating Nesterov momentum into functional gradient descent. As the latter can be prone to early overfitting, we also propose a hybrid approach that prevents a possibly diverging gradient descent routine while ensuring faster convergence. Our adaptations improve vanilla CWB by reducing memory consumption and speeding up the computation time per iteration (through feature discretization), while momentum enables CWB to learn faster and hence require fewer iterations in total. We perform extensive benchmarks on multiple simulated and real-world datasets to demonstrate the improvements in runtime and memory consumption while maintaining state-of-the-art estimation and prediction performance.
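The two ideas can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' compboost implementation: numeric features are quantile-binned once up front so each iteration works on small per-bin aggregates rather than raw values, and a Nesterov (FISTA-style) extrapolation step is applied to the functional gradient descent updates. All function names and parameters here are hypothetical, and the base learner is a simple piecewise-constant fit per binned feature rather than the penalized splines used in the paper.

```python
import numpy as np

def bin_feature(x, n_bins=20):
    # Quantile binning: store one small integer index vector per feature
    # instead of re-scanning raw values in every boosting iteration.
    edges = np.unique(np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(edges) - 2)
    return idx, len(edges) - 1

def cwb_fit(X, y, n_iter=100, lr=0.1, n_bins=20, momentum=True):
    n, p = X.shape
    bins = [bin_feature(X[:, j], n_bins) for j in range(p)]
    f = np.full(n, y.mean())   # main model predictions (in-sample)
    g = f.copy()               # momentum ("lookahead") sequence
    lam = 1.0                  # Nesterov step-size sequence
    for _ in range(n_iter):
        resid = y - g          # negative gradient of squared error at g
        best = None
        for idx, k in bins:
            # Per-bin mean of residuals: a piecewise-constant base learner,
            # fitted from bin aggregates in O(n_bins) after one O(n) pass.
            counts = np.bincount(idx, minlength=k)
            sums = np.bincount(idx, weights=resid, minlength=k)
            means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
            fit = means[idx]
            sse = float(np.sum((resid - fit) ** 2))
            if best is None or sse < best[0]:
                best = (sse, fit)  # componentwise selection: keep best feature
        f_new = g + lr * best[1]
        if momentum:
            # FISTA-style extrapolation of the prediction sequence.
            lam_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * lam * lam))
            g = f_new + ((lam - 1.0) / lam_new) * (f_new - f)
            lam = lam_new
        else:
            g = f_new
        f = f_new
    return f

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)
pred = cwb_fit(X, y)
mse = float(np.mean((y - pred) ** 2))
```

The binning trades a small approximation of each feature for per-iteration cost that scales with the number of bins instead of the number of observations, while the extrapolation step accelerates convergence; the paper's hybrid approach additionally guards against the momentum sequence diverging.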

Article


Journal of Computational and Graphical Statistics

Vol. 32, No. 2. Apr. 2023.
Top Journal

Authors

D. Schalk, B. Bischl, D. Rügamer

Links

DOI

Research Area

 A1 | Statistical Foundations & Explainability

BibTeX Key: SBR23