
Do We Know What LLMs Don't Know? A Study of Consistency in Knowledge Probing

MCML Authors


Michael Hedderich

Dr.

JRG Leader Human-Centered NLP


Hinrich Schütze

Prof. Dr.

Principal Investigator

Abstract

The reliability of large language models (LLMs) is greatly compromised by their tendency to hallucinate, underscoring the need for precise identification of knowledge gaps within LLMs. Various methods for probing such gaps exist, ranging from calibration-based to prompting-based methods. To evaluate these probing methods, in this paper, we propose a new process based on using input variations and quantitative metrics. Through this, we expose two dimensions of inconsistency in knowledge gap probing. (1) Intra-method inconsistency: Minimal non-semantic perturbations in prompts lead to considerable variance in detected knowledge gaps within the same probing method; e.g., the simple variation of shuffling answer options can decrease agreement to around 40%. (2) Cross-method inconsistency: Probing methods contradict each other on whether a model knows the answer. Methods are highly inconsistent -- with decision consistency across methods being as low as 7% -- even though the model, dataset, and prompt are all the same. These findings challenge existing probing methods and highlight the urgent need for perturbation-robust probing frameworks.
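The abstract's consistency numbers (e.g. agreement dropping to around 40% under answer-option shuffling, cross-method decision consistency as low as 7%) can be read as simple agreement rates over per-item "knows / does not know" decisions. The following is a minimal illustrative sketch, not the paper's actual evaluation code; the function name and the example decision vectors are hypothetical.

```python
def agreement(decisions_a, decisions_b):
    """Fraction of items on which two probing runs make the same
    'knows' / 'does not know' decision (a simple agreement rate)."""
    assert len(decisions_a) == len(decisions_b)
    matches = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return matches / len(decisions_a)

# Hypothetical data: decisions from one probing method on the original
# prompt vs. the same prompt with shuffled answer options.
original = [True, True, False, True, False, True, True, False, True, True]
shuffled = [True, False, False, True, True, False, True, False, False, True]

print(agreement(original, shuffled))  # 0.6
```

The same function applies to cross-method inconsistency by comparing the decision vectors of two different probing methods on an identical model, dataset, and prompt.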

inproceedings


Findings @EMNLP 2025

Findings of the Conference on Empirical Methods in Natural Language Processing. Suzhou, China, Nov 04-09, 2025. To be published. Preprint available.
A* Conference

Authors

R. Zhao, A. Köksal, A. Modarressi, M. A. Hedderich, H. Schütze

Research Area

 B2 | Natural Language Processing

BibTeX Key: ZKM+25
