
Causal Fair Machine Learning via Rank-Preserving Interventional Distributions

Abstract

A decision can be defined as fair if equal individuals are treated equally and unequals unequally. Adopting this definition, the task of designing machine learning models that mitigate unfairness in automated decision-making systems must include causal thinking when introducing protected attributes. Following a recent proposal, we define individuals as being normatively equal if they are equal in a fictitious, normatively desired (FiND) world, where the protected attribute has no (direct or indirect) causal effect on the target. We propose rank-preserving interventional distributions to define an estimand of this FiND world and a warping method for estimation. Evaluation criteria for both the method and resulting model are presented and validated through simulations and empirical data. With this, we show that our warping approach effectively identifies the most discriminated individuals and mitigates unfairness.
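
The warping idea can be illustrated with a small, hypothetical sketch. Assuming a simplified setting with a single continuous feature, a binary protected attribute, and group-wise normal fits (none of which is prescribed by the paper), a rank-preserving map sends each individual's value to its own within-group quantile and reads that quantile off a reference distribution standing in for the FiND world. The function name warp_to_find_world and the parametric choices are illustrative assumptions, not the paper's estimator.

```python
# Hypothetical sketch of rank-preserving warping via quantile matching.
# Assumptions (not from the paper): one continuous feature, a binary
# protected attribute, and normal distributions fitted per group.
import numpy as np
from scipy import stats


def warp_to_find_world(x, group, reference_group=0):
    """Warp feature values of the non-reference group onto the reference
    group's distribution while preserving within-group ranks.

    x: 1-D array of feature values
    group: 1-D array of binary protected-attribute labels
    reference_group: label whose distribution plays the role of the
        FiND-world distribution in this simplified sketch
    """
    x = np.asarray(x, dtype=float)
    group = np.asarray(group)
    x_warped = x.copy()

    # Fit a simple normal model to the reference group; the paper's
    # estimand is more general, this is only for illustration.
    ref = x[group == reference_group]
    mu_ref, sd_ref = ref.mean(), ref.std(ddof=1)

    for g in np.unique(group):
        if g == reference_group:
            continue
        vals = x[group == g]
        mu_g, sd_g = vals.mean(), vals.std(ddof=1)
        # Rank-preserving map: send each value to its own quantile in its
        # group, then read that quantile off the reference distribution.
        u = stats.norm.cdf(vals, loc=mu_g, scale=sd_g)
        x_warped[group == g] = stats.norm.ppf(u, loc=mu_ref, scale=sd_ref)
    return x_warped
```

Because the quantile map is monotone within each group, individuals keep their relative order after warping, which is the rank-preservation property named in the title.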



ECAI 2023

1st Workshop on Fairness and Bias in AI co-located with the 26th European Conference on Artificial Intelligence. Kraków, Poland, Sep 30-Oct 04, 2023.

Authors

L. Bothmann, S. Dandl, M. Schomaker

Links

PDF

Research Area

A1 | Statistical Foundations & Explainability

BibTeX Key: BDS23
