
Investigating Labeler Bias in Face Annotation for Machine Learning

MCML Authors


Albrecht Schmidt, Prof. Dr. (Principal Investigator)

Abstract

In a world increasingly reliant on artificial intelligence, it is more important than ever to consider its ethical implications. One key under-explored challenge is labeler bias — bias introduced by the individuals who label datasets — which can produce inherently biased training datasets and subsequently lead to inaccurate or unfair decisions in healthcare, employment, education, and law enforcement. Hence, we conducted a study (N=98) to investigate and measure the existence of labeler bias using images of people of different ethnicities and sexes in a labeling task. Our results show that participants hold stereotypes that influence their decision-making process and that labeler demographics impact the labels they assign. We also discuss how labeler bias influences datasets and, subsequently, the models trained on them. Overall, a high degree of transparency must be maintained throughout the entire artificial intelligence training process to identify and correct biases in the data as early as possible.

inproceedings

HHAI 2024

3rd International Conference on Hybrid Human-Artificial Intelligence. Malmö, Sweden, Jun 10-14, 2024.

Authors

L. Haliburton • S. Ghebremedhin • R. Welsch • A. Schmidt • S. Mayer

Links

DOI

Research Area

C5 | Humane AI

BibTeX Key: HGW+24
