
Systematic Evaluation of Uncertainty Estimation Methods in Large Language Models

MCML Authors

Abstract

Large language models (LLMs) produce outputs with varying levels of uncertainty and, just as often, varying levels of correctness, making their practical reliability far from guaranteed. To quantify this uncertainty, we systematically evaluate four approaches for confidence estimation in LLM outputs: VCE, MSP, Sample Consistency, and CoCoA (Vashurin et al., 2025). To evaluate these approaches, we conduct experiments on four question-answering tasks using a state-of-the-art open-source LLM. Our results show that each uncertainty metric captures a different facet of model confidence and that the hybrid CoCoA approach yields the best reliability overall, improving both calibration and discrimination of correct answers. We discuss the trade-offs of each method and provide recommendations for selecting uncertainty measures in LLM applications.
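To give a rough sense of what two of these signals look like in practice, the sketch below computes an MSP-style confidence from per-token log-probabilities and a simple sample-consistency score from repeated generations. This is a minimal illustration under our own assumptions (function names, normalization, and answer-matching are hypothetical), not the paper's implementation or the CoCoA formulation.

```python
import math
from collections import Counter


def sequence_confidence(token_logprobs):
    """MSP-style confidence: length-normalized probability of the generated answer.

    token_logprobs: per-token log-probabilities of the sampled answer, as many
    LLM APIs can return them (illustrative input; the paper's exact scoring may differ).
    """
    if not token_logprobs:
        return 0.0
    # Geometric mean of token probabilities, so long answers are not penalized
    # merely for their length.
    return math.exp(sum(token_logprobs) / len(token_logprobs))


def sample_consistency(sampled_answers):
    """Consistency-based confidence: share of samples that agree with the
    majority-vote answer (here, after naive lowercase/whitespace normalization)."""
    counts = Counter(a.strip().lower() for a in sampled_answers)
    majority_count = counts.most_common(1)[0][1]
    return majority_count / len(sampled_answers)


# Toy usage with made-up numbers:
print(sequence_confidence([-0.1, -0.3, -0.05]))                  # close to 1 -> confident
print(sample_consistency(["Paris", "Paris", "Lyon", "Paris"]))   # 0.75
```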

Preprint

Oct. 2025

Authors

C. Hobelsberger • T. Winner • A. Nawroth • O. Mitevski • A.-C. Haensch

Links

arXiv

Research Area

C4 | Computational Social Sciences

BibTeXKey: HWN+25a
