When assessing the quality of prediction models in machine learning, confidence intervals (CIs) for the generalization error, which measures predictive performance, are a crucial tool. Fortunately, many methods exist for computing such CIs, and promising new approaches are continuously being proposed. Typically, these methods combine resampling procedures, most popular among them cross-validation and bootstrapping, with different variance estimation techniques. However, there is currently no consensus on when each of these combinations can be most reliably employed and how they compare in general. Here, we present the results of a large-scale study comparing CIs for the generalization error, in which we empirically evaluate 13 different CI methods on a total of 19 tabular regression and classification problems, using seven learning algorithms and a total of eight loss functions. Furthermore, we give an overview of the methodological foundations and inherent challenges of constructing CIs for the generalization error.
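To illustrate the kind of method the abstract refers to, the sketch below builds a naive K-fold cross-validation CI for the generalization error: a t-interval over the per-fold mean squared errors. This is a minimal, hypothetical example on synthetic data, not one of the 13 methods evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative; any tabular dataset works).
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(scale=0.5, size=n)

# Naive K-fold CV interval: fit on K-1 folds, record the held-out loss,
# then form a t-interval over the K per-fold losses.
K = 10
folds = np.array_split(rng.permutation(n), K)
fold_errors = []
for k in range(K):
    test = folds[k]
    train = np.setdiff1d(np.arange(n), test)
    # Ordinary least squares fit on the training folds.
    coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    fold_errors.append(np.mean((X[test] @ coef - y[test]) ** 2))

fold_errors = np.array(fold_errors)
point = fold_errors.mean()
se = fold_errors.std(ddof=1) / np.sqrt(K)
t_crit = 2.262  # two-sided 95% t quantile for df = K - 1 = 9
lo, hi = point - t_crit * se, point + t_crit * se
print(f"MSE estimate {point:.3f}, naive 95% CI [{lo:.3f}, {hi:.3f}]")
```

Note that the per-fold losses share training data and are therefore not independent, which is one of the inherent challenges the paper discusses: intervals of this naive form can under-cover.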
inproceedings FB25
BibTeXKey: FB25