This paper highlights crucial technical considerations that arise when explainable artificial intelligence (XAI) methods are used in AI governance to explain black-box supervised machine learning models. Applying these methods involves technical nuances that, if overlooked, can yield misleading interpretations. We present the key factors for a non-technical governance audience through a conceptual example: feature importance methods are used to explain an AI model that automatically invites job candidates to interviews based on their CVs, as sketched below. By exposing common pitfalls, we aim to better align the demands of AI governance with the capabilities of XAI methods.
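To make the conceptual example concrete, the following is a minimal sketch of one common feature importance method (permutation importance via scikit-learn) applied to a hypothetical CV-screening classifier. The feature names, the synthetic labeling rule, and the random-forest model are illustrative assumptions, not the paper's actual setup or data.

```python
# Illustrative sketch: permutation feature importance for a hypothetical
# CV-screening model. All features and labels are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical CV features: years of experience, degree level (0-2),
# number of listed skills, and an irrelevant noise column.
X = np.column_stack([
    rng.uniform(0, 20, n),   # years_experience
    rng.integers(0, 3, n),   # degree_level
    rng.integers(0, 15, n),  # num_skills
    rng.normal(size=n),      # noise (should receive ~zero importance)
])
feature_names = ["years_experience", "degree_level", "num_skills", "noise"]

# Synthetic "invite to interview" label driven by experience and skills.
y = (0.3 * X[:, 0] + 1.5 * X[:, 1] + 0.4 * X[:, 2]
     + rng.normal(scale=2.0, size=n)) > 6.0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: the drop in held-out accuracy when one
# feature's values are shuffled, averaged over repeated shuffles.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name:>16}: {mean:.3f} +/- {std:.3f}")
```

Even in this toy setting, the pitfalls the paper warns about apply: importance scores depend on the model, the evaluation data, and the method's assumptions, so they should not be read as causal statements about why a given candidate was invited.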