
Explaining Change in Models and Data With Global Feature Importance and Effects


Abstract

In dynamic machine learning environments, where data streams continuously evolve, traditional explanation methods struggle to remain faithful to the underlying model or data distribution. This work therefore presents a unified framework for efficiently computing incremental, model-agnostic global explanations tailored to time-dependent models. By extending static model-agnostic methods such as Permutation Feature Importance, SAGE, and Partial Dependence Plots to the online learning setting, the proposed framework enables explanations to be updated continuously as new data becomes available. These incremental variants ensure that global explanations remain relevant while minimizing computational overhead. The framework also addresses key challenges related to maintaining the data distribution and generating perturbations in online learning, offering time- and memory-efficient solutions such as geometric reservoir-based sampling for data replacement.
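The geometric reservoir-based sampling mentioned in the abstract can be illustrated with a minimal sketch: unlike classic uniform reservoir sampling, each arriving point replaces a uniformly chosen slot with a fixed probability, so the ages of stored points follow a geometric distribution and the reservoir tracks the recent data distribution. The class name, the `replace_prob` parameter, and the exact update rule below are illustrative assumptions, not the paper's implementation.

```python
import random


class GeometricReservoir:
    """Fixed-size sample of a data stream that favors recent points.

    Illustrative sketch: each incoming point overwrites a uniformly
    chosen slot with probability `replace_prob`, so older points
    survive with geometrically decaying probability. Stored points can
    then be drawn to perturb feature values when computing incremental
    global explanations.
    """

    def __init__(self, size, replace_prob=0.5, seed=None):
        self.size = size
        self.replace_prob = replace_prob
        self.buffer = []
        self.rng = random.Random(seed)

    def update(self, x):
        # Fill the reservoir first; afterwards, replace a random slot
        # with fixed probability so recent points dominate over time.
        if len(self.buffer) < self.size:
            self.buffer.append(x)
        elif self.rng.random() < self.replace_prob:
            self.buffer[self.rng.randrange(self.size)] = x

    def sample(self):
        # Draw one stored point, e.g. as a replacement value when
        # marginalizing a feature for an incremental PFI/SAGE estimate.
        return self.rng.choice(self.buffer)
```

With `replace_prob` controlling the recency bias, the reservoir needs only O(size) memory and O(1) time per stream element, which matches the abstract's emphasis on time- and memory-efficient maintenance of the data distribution.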

inproceedings


TempXAI @ECML-PKDD 2024

Tutorial-Workshop Explainable AI for Time Series and Data Streams at European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases. Vilnius, Lithuania, Sep 09-13, 2024.

Authors

M. Muschalik • F. Fumagalli • B. Hammer • E. Hüllermeier

Links

PDF

Research Area

 A3 | Computational Models

BibTeXKey: MFH+24a
