
Stop Measuring Calibration When Humans Disagree

MCML Authors


Barbara Plank

Prof. Dr.

Principal Investigator

Abstract

Calibration is a popular framework to evaluate whether a classifier knows when it does not know - i.e., its predictive probabilities are a good indication of how likely a prediction is to be correct. Correctness is commonly estimated against the human majority class. Recently, calibration to human majority has been measured on tasks where humans inherently disagree about which class applies. We show that measuring calibration to human majority given inherent disagreements is theoretically problematic, demonstrate this empirically on the ChaosNLI dataset, and derive several instance-level measures of calibration that capture key statistical properties of human judgements - including class frequency, ranking and entropy.
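To make the contrast in the abstract concrete, here is a minimal, illustrative Python sketch (not the paper's actual measures): it compares the usual "correct against the human majority class" view with per-instance statistics of the kind mentioned above, such as the entropy of human judgements, agreement in class ranking, and the distance between the model's predictive distribution and the human label distribution. All function names and the example numbers are hypothetical.

```python
import numpy as np

def human_distribution(counts):
    """Normalize per-instance human vote counts into a label distribution."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of a probability vector."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    return float(-(p * np.log(p)).sum())

def instance_level_stats(model_probs, counts):
    """Per-instance comparison of model probabilities and human judgements.

    Hypothetical illustration: contrasts majority-based correctness with
    statistics sensitive to class frequency, ranking and entropy.
    """
    p_human = human_distribution(counts)
    p_model = np.asarray(model_probs, dtype=float)
    return {
        # standard view: is the model's top class the human majority class?
        "majority_correct": int(np.argmax(p_model) == np.argmax(p_human)),
        # disagreement among annotators vs. model uncertainty
        "human_entropy": entropy(p_human),
        "model_entropy": entropy(p_model),
        # do the class rankings of model and humans match?
        "same_ranking": bool((np.argsort(-p_model) == np.argsort(-p_human)).all()),
        # total variation distance between the two distributions
        "tvd": 0.5 * float(np.abs(p_model - p_human).sum()),
    }

# Example: a ChaosNLI-style item with 100 annotations over
# {entailment, neutral, contradiction}
print(instance_level_stats(model_probs=[0.55, 0.35, 0.10], counts=[48, 42, 10]))
```

On items like this, a model can be "correct" and confident with respect to the majority class while annotators are nearly split, which is exactly the situation where majority-based calibration becomes misleading.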

inproceedings


EMNLP 2022

Conference on Empirical Methods in Natural Language Processing. Abu Dhabi, United Arab Emirates, Dec 07-11, 2022.
A* Conference

Authors

J. Baan • W. Aziz • B. Plank • R. Fernández

Links

DOI

Research Area

 B2 | Natural Language Processing

BibTeXKey: BAP+22
