
On Grouping and Partitioning Approaches in Interpretable Machine Learning


Abstract

This thesis addresses the challenges of interpreting machine learning models, focusing in particular on the limitations of global explanation methods. It identifies two key issues: high-dimensional outputs that are incomprehensible to humans, and misleading interpretations caused by aggregation bias. The thesis proposes solutions to these problems, such as grouping features to obtain simpler interpretations and using recursive partitioning algorithms to provide regional explanations, ensuring more accurate and understandable insights into model behavior. (Shortened.)
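The aggregation-bias problem mentioned in the abstract can be illustrated with a small hand-rolled sketch, which is not the thesis's actual algorithms: when a feature's effect flips sign depending on another feature, the global partial dependence curve averages the opposing effects away, while splitting the data on the interacting feature recovers the two regional effects. The simulated data, the fixed split on x2, and the partial_dependence helper below are illustrative assumptions; scikit-learn and NumPy are assumed to be available.

# Illustrative sketch only: global vs. per-region partial dependence.
# All names and the data-generating process are hypothetical, not from the thesis.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-1, 1, size=(n, 3))
# Interaction: the effect of x0 flips sign depending on x2.
y = np.where(X[:, 2] > 0, X[:, 0], -X[:, 0]) + 0.1 * rng.normal(size=n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid):
    # Average prediction over the data with `feature` fixed to each grid value.
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value
        pd_values.append(model.predict(X_mod).mean())
    return np.array(pd_values)

grid = np.linspace(-1, 1, 11)

# Global PDP for x0: the opposing regional effects largely cancel (near-flat curve).
print("global  :", np.round(partial_dependence(model, X, 0, grid), 2))

# Per-region PDPs after one split on x2: the sign flip becomes visible.
for name, mask in [("x2 <= 0", X[:, 2] <= 0), ("x2 > 0", X[:, 2] > 0)]:
    print(f"{name:8}:", np.round(partial_dependence(model, X[mask], 0, grid), 2))

In the setting described by the abstract, such a split would be found by a recursive partitioning algorithm rather than chosen by hand as in this sketch.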

phdthesis

Dissertation

LMU München. Dec. 2023

Authors

J. Herbinger

Links

DOI

Research Area

A1 | Statistical Foundations & Explainability

BibTeX Key: Her23
