
A Statistical Case Against Empirical Human-AI Alignment

MCML Authors

Abstract

Empirical human-AI alignment aims to make AI systems act in line with observed human behavior. While noble in its goals, we argue that empirical alignment can inadvertently introduce statistical biases that warrant caution. This position paper therefore advocates against naive empirical alignment, offering prescriptive alignment and a posteriori empirical alignment as alternatives. We substantiate our principled argument with tangible examples, such as human-centric decoding of language models.

Preprint

Feb. 2025

Authors

J. Rodemann • E. Garces Arias • C. Luther • C. Jansen • T. Augustin

Research Area

 A1 | Statistical Foundations & Explainability

BibTeXKey: RGL+25
