
Why Uncertainty Calibration Matters for Reliable Perturbation-Based Explanations

MCML Authors

Abstract

Perturbation-based explanations are widely used to enhance the transparency of modern machine-learning models. However, their reliability is often compromised by the unknown behavior of the model under the specific perturbations used. This paper investigates the relationship between uncertainty calibration (the alignment of model confidence with actual accuracy) and perturbation-based explanations. We show that models frequently produce unreliable probability estimates when subjected to explainability-specific perturbations, and we theoretically prove that this directly undermines explanation quality. To address this, we introduce ReCalX, a novel approach to recalibrate models for improved perturbation-based explanations while preserving their original predictions. Experiments on popular computer vision models demonstrate that our calibration strategy produces explanations that are more aligned with human perception and actual object locations.
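The notion of calibration used above (confidence matching accuracy) is commonly quantified with the standard Expected Calibration Error (ECE). The sketch below illustrates that metric only; it is not the paper's ReCalX method, and the toy data are purely illustrative.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: size-weighted average gap between mean confidence and
    empirical accuracy over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # |mean confidence - accuracy| in this bin, weighted by bin share
            ece += mask.mean() * abs(confidences[mask].mean() - correct[mask].mean())
    return ece

# Toy example: confidence 0.8 with 8/10 predictions correct is perfectly calibrated
conf = np.full(10, 0.8)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
print(expected_calibration_error(conf, corr))  # → 0.0
```

A perturbation-aware variant, as motivated by the abstract, would evaluate this same metric on perturbed inputs (e.g., with features masked out) rather than on the clean data.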



XAI4Science @ICLR 2025

Workshop XAI4Science: From Understanding Model Behavior to Discovering New Scientific Knowledge at the 13th International Conference on Learning Representations. Singapore, Apr 24-28, 2025.

Authors

T. Decker • V. Tresp • F. Buettner


Research Area

A3 | Computational Models

BibTeX Key: DTB25
