
Debiasing Classifiers: Is Reality at Variance With Expectation?

MCML Authors


Bernd Bischl • Prof. Dr. • Director

Abstract

We present an empirical study of debiasing methods for classifiers, showing that debiasers often fail in practice to generalize out-of-sample, and can in fact make fairness worse rather than better. A rigorous evaluation of the debiasing treatment effect requires extensive cross-validation beyond what is usually done. We demonstrate that this phenomenon can be explained as a consequence of bias-variance trade-off, with an increase in variance necessitated by imposing a fairness constraint. Follow-up experiments validate the theoretical prediction that the estimation variance depends strongly on the base rates of the protected class. Considering fairness--performance trade-offs justifies the counterintuitive notion that partial debiasing can actually yield better results in practice on out-of-sample data.
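The abstract's claim that the variance of the fairness estimate depends strongly on the protected-class base rate can be illustrated with a small simulation. This sketch is not from the paper; all function names and the toy setup (a classifier whose predictions are independent of the group, so the true demographic-parity gap is zero) are our own assumptions:

```python
import numpy as np

def dp_gap(y_pred, group):
    """Demographic parity gap: |P(yhat=1 | g=1) - P(yhat=1 | g=0)|."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def gap_variance(base_rate, n=2000, n_folds=50, seed=0):
    """Fold-to-fold variance of the measured fairness gap when the
    protected group makes up `base_rate` of each evaluation sample.
    The true gap is zero by construction; any measured gap is noise."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(n_folds):
        # membership in the protected class, drawn at the given base rate
        group = rng.random(n) < base_rate
        # toy classifier output, independent of group membership
        y_pred = (rng.random(n) < 0.5).astype(float)
        gaps.append(dp_gap(y_pred, group))
    return np.var(gaps)

# The rarer the protected class, the noisier the fairness estimate:
print(gap_variance(base_rate=0.05), gap_variance(base_rate=0.5))
```

Because the gap is a difference of two group-wise means, its sampling variance scales roughly inversely with the size of the smaller group, which is why out-of-sample fairness evaluation needs the extensive cross-validation the abstract calls for.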



Preprint

Nov. 2020

Authors

A. Agrawal • F. Pfisterer • B. Bischl • F. Buet-Golfouse • S. Sood • J. Chen • S. Shah • S. Vollmer

Research Area

 A1 | Statistical Foundations & Explainability

BibTeX Key: APB+20
