AI Keynote Series
Auditing Fairness Under Unobserved Confounding
Michael Oberst, Johns Hopkins University
08.08.2024
4:00 pm - 5:30 pm
Online via Zoom
Inequity in resource allocation has been well documented in many domains, including healthcare. Causal measures of equity/fairness seek to isolate biases in allocation that are not explained by other factors, such as underlying need. However, these measures require the (strong) assumption that we observe all relevant indicators of need, an assumption that rarely holds in practice. For instance, if resources are allocated based on indicators of need that are not recorded in our data ("unobserved confounders"), we may understate (or overstate) the amount of inequity.
In this talk, I will present work demonstrating that we can still give informative bounds on certain causal measures of fairness while relaxing (or even eliminating) the assumption that all relevant indicators of need are observed. We exploit the fact that in many real-world settings (e.g., the release of a new treatment) we have data from before any allocation took place, which can be used to derive unbiased estimates of need. This result is of immediate practical interest: it lets us audit existing decision-making systems for unfair outcomes in a principled manner. For instance, in a real-world study of Paxlovid allocation, we show that the observed racial inequity cannot be explained by unobserved confounders of the same strength as important observed covariates.
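The flavor of such a sensitivity analysis can be conveyed with a small sketch. This is not the speaker's actual method: the function name, the synthetic data, the stand-in need-adjustment weights, and the marginal-sensitivity parametrization (true weights assumed within a factor Γ of the nominal ones) are all illustrative assumptions. The sketch shows how a need-adjusted allocation rate per group becomes an interval that widens with the assumed confounding strength Γ:

```python
import numpy as np

def hajek_bounds(y, w, gamma):
    """Bound a weighted (Hajek) mean of a binary outcome y when the
    true weights are only known to lie within [w / gamma, gamma * w].
    gamma = 1 recovers the point estimate; larger gamma encodes a
    stronger hypothetical unobserved confounder."""
    # The ratio sum(w * y) / sum(w) is maximized by inflating weights
    # where y == 1 and deflating them where y == 0, and vice versa.
    w_hi = np.where(y == 1, gamma * w, w / gamma)
    w_lo = np.where(y == 1, w / gamma, gamma * w)
    return (w_lo * y).sum() / w_lo.sum(), (w_hi * y).sum() / w_hi.sum()

# Synthetic audit: allocation depends on observed need and, unfairly, on group.
rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, size=n)
need = rng.normal(size=n)
alloc = rng.binomial(1, 1 / (1 + np.exp(-(need + 0.4 * group))))
# Stand-in for weights produced by some need-adjustment procedure.
weights = rng.uniform(0.5, 2.0, size=n)

for gamma in (1.0, 1.5):
    for g in (0, 1):
        m = group == g
        lo, hi = hajek_bounds(alloc[m], weights[m], gamma)
        print(f"gamma={gamma}, group {g}: adjusted rate in [{lo:.3f}, {hi:.3f}]")
```

If the two groups' intervals remain disjoint at a given Γ, the observed disparity cannot be explained by unobserved confounding of that strength, which mirrors the style of conclusion drawn in the Paxlovid study described above.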
Organized by:
Institute of AI in Management LMU Munich
Related

Colloquium • 25.06.2025 • LMU Department of Statistics and via zoom
Practical Causal Reasoning as a Means for Ethical ML
25.06.25, 4:15-5:45 pm: Isabel Valera (Saarland University) explores fairness in ML and introduces DeCaFlow, a causal model for counterfactuals.

Colloquium • 11.06.2025 • LMU Department of Statistics and via zoom
Veridical Data Science and PCS Uncertainty Quantification
11.06.25, 4:15-5:45 pm: Bin Yu (UC Berkeley) on how PCS improves AI reliability by tackling hidden uncertainty in data science decisions.