
Leveraging Model-Based Trees as Interpretable Surrogate Models for Model Distillation

Abstract

Surrogate models play a crucial role in retrospectively interpreting complex and powerful black box machine learning models via model distillation. This paper focuses on using model-based trees as surrogate models which partition the feature space into interpretable regions via decision rules. Within each region, interpretable models based on additive main effects are used to approximate the behavior of the black box model, striking an optimal balance between interpretability and performance. Four model-based tree algorithms, namely SLIM, GUIDE, MOB, and CTree, are compared regarding their ability to generate such surrogate models. We investigate fidelity, interpretability, stability, and the algorithms' capability to capture interaction effects through appropriate splits. Based on our comprehensive analyses, we finally provide an overview of user-specific recommendations.
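The distillation setup described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's method: the model-based tree algorithms compared in the paper (SLIM, GUIDE, MOB, CTree) are not available in scikit-learn, so a plain `DecisionTreeRegressor` stands in as the interpretable surrogate. The key idea carries over: the surrogate is fit to the black box's *predictions* rather than the original labels, and fidelity measures how closely the surrogate reproduces the black box.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

# A black-box model: a random forest trained on synthetic data.
X, y = make_friedman1(n_samples=1000, random_state=0)
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Distillation step: the surrogate is trained on the black box's
# predictions, not on the ground-truth labels.
y_bb = black_box.predict(X)
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y_bb)

# Fidelity: how well the interpretable surrogate mimics the black box.
fidelity = r2_score(y_bb, surrogate.predict(X))
print(f"Fidelity (R^2 to black-box predictions): {fidelity:.3f}")
```

A model-based tree would additionally fit an additive main-effects model (rather than a constant) in each leaf region, which is what lets the tree's splits absorb interaction effects while the leaf models stay interpretable.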

ECAI 2023

3rd International Workshop on Explainable and Interpretable Machine Learning co-located with the 26th European Conference on Artificial Intelligence. Kraków, Poland, Sep 30-Oct 04, 2023.
A Conference

Authors

J. Herbinger • S. Dandl • F. K. Ewald • S. Loibl • G. Casalicchio

Links

DOI

Research Area

 A1 | Statistical Foundations & Explainability

BibTeX Key: HDE+23
