
Fares on Fairness: Using a Total Error Framework to Examine the Role of Measurement and Representation in Training Data on Model Fairness and Bias


Abstract

Data-driven decisions, often based on predictions from machine learning (ML) models, are becoming ubiquitous. For these decisions to be just, the underlying ML models must be fair, i.e., work equally well for all parts of the population, such as groups defined by gender or age. If, however, a trained model is accurate but not fair, what are the logical next steps? How can we guide the whole data pipeline so that we avoid training unfair models based on inadequate data, recognizing possible sources of unfairness early on? How can the concepts of data-based sources of unfairness in the fair ML literature be organized, perhaps in a way that yields new insight? In this paper, we explore two total error frameworks from the social sciences, Total Survey Error and its generalization Total Data Quality, to help elucidate issues related to fairness and trace their antecedents. The goal of this thought piece is to acquaint the fair ML community with these two frameworks, discussing errors of measurement and errors of representation through their organized structure. We illustrate how they may be useful, both practically and conceptually.

inproceedings


EWAF 2025

4th European Workshop on Algorithmic Fairness. Eindhoven, The Netherlands, Jun 30-Jul 02, 2025.

Authors

P. O. Schenk • C. Kern • T. D. Buskirk


Research Area

C4 | Computational Social Sciences

BibTeX Key: SKB25a
