
Multi-Objective Counterfactual Fairness

Abstract

When machine learning is used to automate judgments, e.g. in areas like lending or crime prediction, incorrect decisions can lead to adverse effects for affected individuals. This occurs, for example, if the data used to train these models are based on prior decisions that are unfairly skewed against specific subpopulations. If models are to automate decision-making, they must account for these biases to avoid perpetuating or creating discriminatory practices. Counterfactual fairness audits models with respect to a notion of fairness that asks for equal outcomes between a decision made in the real world and one made in a counterfactual world where the individual subject to the decision belongs to a different protected demographic group. In this work, we propose a method to conduct such audits without access to the underlying causal structure of the data-generating process by framing the audit as a multi-objective optimization task that can be efficiently solved with a genetic algorithm.
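The core idea of the abstract, searching for counterfactuals by trading off several objectives with a genetic algorithm, might be sketched as below. The toy linear model, the two objectives (flipping the outcome vs. staying close to the original instance), and the genetic operators are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import random

def model(x):
    # Toy linear "credit score" classifier (an assumption, not the paper's model).
    return 1.0 if 0.6 * x[0] + 0.4 * x[1] > 0.5 else 0.0

def objectives(cand, original, target=1.0):
    # Objective 1: validity -- the candidate should receive the target outcome.
    validity = abs(model(cand) - target)
    # Objective 2: proximity -- the candidate should stay close to the original.
    proximity = sum(abs(a - b) for a, b in zip(cand, original))
    return (validity, proximity)

def dominates(fa, fb):
    # Pareto dominance: no worse in all objectives, strictly better in one.
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def evolve(original, pop_size=40, gens=60, seed=0):
    rng = random.Random(seed)
    # Initialize the population by perturbing the original instance.
    pop = [[min(1.0, max(0.0, g + rng.gauss(0, 0.3))) for g in original]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = [(objectives(ind, original), ind) for ind in pop]

        def pick():
            # Binary tournament selection based on Pareto dominance.
            (fa, a), (fb, b) = rng.sample(scored, 2)
            if dominates(fa, fb):
                return a
            if dominates(fb, fa):
                return b
            return a if sum(fa) < sum(fb) else b  # tie-break on objective sum

        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            # Uniform crossover followed by small Gaussian mutation.
            child = [g1 if rng.random() < 0.5 else g2 for g1, g2 in zip(p1, p2)]
            child = [min(1.0, max(0.0, g + rng.gauss(0, 0.05)))
                     if rng.random() < 0.2 else g for g in child]
            children.append(child)
        pop = children
    # Return the non-dominated (Pareto) front of the final population.
    scored = [(objectives(ind, original), ind) for ind in pop]
    return [ind for f, ind in scored
            if not any(dominates(g, f) for g, _ in scored)]

original = [0.2, 0.3]      # model(original) == 0.0: the unfavorable outcome
front = evolve(original)   # Pareto set of candidate counterfactuals
```

The returned front contains the trade-off between flipping the model's decision and minimally changing the instance; a full implementation would additionally handle categorical features, plausibility constraints, and a proper non-dominated sorting scheme such as NSGA-II.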

GECCO 2022

Genetic and Evolutionary Computation Conference. Boston, MA, USA, Jul 09-13, 2022.

Authors

S. Dandl, F. Pfisterer, B. Bischl

Links

DOI

Research Area

 A1 | Statistical Foundations & Explainability

BibTeX Key: DPB22
