
Bridging Gaps in Interpretable Machine Learning: Sensitivity Analysis, Marginal Effects, and Cluster Explanations

MCML Authors

Christian Alexander Scholbeck


Abstract

This thesis explores interpretable machine learning (IML) through six papers, bridging the gap between IML and model interpretation in other domains. It presents a generalized framework for model-agnostic interpretation methods, highlights potential pitfalls, and connects IML to sensitivity analysis as used in fields such as environmental modeling. A novel approach, forward marginal effects (FMEs), is introduced to interpret predictive models at multiple levels, supported by the R package fmeffects. The work also extends IML to unsupervised learning by proposing algorithm-agnostic cluster explanation methods, including two new techniques, SMART and IDEA, for analyzing feature contributions to clustering. (Shortened.)
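The forward marginal effect mentioned above is commonly defined as the change in a model's prediction after a forward step of size h in one feature. A minimal sketch of that idea follows; the function names, the toy model, and the data here are hypothetical illustrations, not the fmeffects package API:

```python
# Sketch of a forward marginal effect (FME), assuming the definition
# FME_h(x) = f(x with feature j shifted by h) - f(x).
# toy_model and forward_marginal_effect are hypothetical stand-ins.

def forward_marginal_effect(model, x, feature, h):
    """Change in prediction after a forward step h in one feature."""
    x_shifted = dict(x)
    x_shifted[feature] = x[feature] + h
    return model(x_shifted) - model(x)

def toy_model(x):
    # Nonlinear in x1, linear in x2, so the FME for x1 depends on
    # where in feature space it is evaluated.
    return x["x1"] ** 2 + 3 * x["x2"]

obs = {"x1": 2.0, "x2": 1.0}
fme = forward_marginal_effect(toy_model, obs, "x1", h=1.0)
# (2 + 1)^2 - 2^2 = 5.0
```

Because the step is taken from an actual observation, such effects can be aggregated across observations for a global view or reported per observation for local interpretation, which is the multi-level aspect the abstract refers to.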



Dissertation

LMU München. May 2024.

Authors

C. A. Scholbeck

Links

DOI

Research Area

A1 | Statistical Foundations & Explainability

BibTeXKey: Sch24
