
Technical Considerations for XAI in AI Governance

MCML Authors

Abstract

This paper highlights crucial technical considerations when applying explainable artificial intelligence (XAI) methods in AI governance to explain black-box supervised machine learning models. We emphasize that applying these methods in AI governance involves technical nuances that, if overlooked, can yield misleading interpretations. We outline key factors to consider in AI governance for a non-technical audience, using a conceptual example: feature importance methods explain an AI model that automatically invites job candidates to interviews based on their CVs. By pointing out common pitfalls, we aim to better align the demands of AI governance with the capabilities of XAI methods.
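The conceptual example from the abstract can be sketched in code. Below is a minimal, self-contained permutation feature importance computation on a toy CV-screening model; the feature names, synthetic data, and the simple threshold "black box" are all invented for illustration, as the paper does not specify a particular method, dataset, or implementation.

```python
import random

random.seed(0)
n = 1000
# Hypothetical applicant features: years of experience, education level, skill score.
X = [[random.uniform(0, 20), random.randint(0, 3), random.uniform(0, 100)]
     for _ in range(n)]
# By construction, invitations depend only on the skill score.
y = [1 if row[2] > 60 else 0 for row in X]

def model_predict(rows):
    """Toy black-box model: invites applicants with a high skill score."""
    return [1 if row[2] > 60 else 0 for row in rows]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def permutation_importance(predict, X, y, n_repeats=10, seed=1):
    """Mean drop in accuracy when one feature column is shuffled."""
    rng = random.Random(seed)
    baseline = accuracy(predict(X), y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - accuracy(predict(Xp), y))
        importances.append(sum(drops) / n_repeats)
    return importances

names = ["years_experience", "education_level", "skill_score"]
imp = permutation_importance(model_predict, X, y)
print(dict(zip(names, [round(v, 3) for v in imp])))
```

Because the toy model ignores experience and education entirely, their importance is exactly zero, while shuffling the skill score destroys accuracy; in real CV-screening models, such importances require far more careful interpretation, which is the point of the paper.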

Publication Type: inproceedings


PAIG @EurIPS 2025

Workshop on Private AI Governance at the European Conference on Neural Information Processing Systems (EurIPS). Copenhagen, Denmark, Dec 03-05, 2025.

Authors

S. Dandl • F. K. Ewald • E. Valero-Leal • B. Bischl • K. Blesch

Research Area

A1 | Statistical Foundations & Explainability

BibTeX Key: DEV+25
