
Interpretable Machine Learning for TabPFN

MCML Authors

Matthias Feurer • Prof. Dr. • Former Thomas Bayes Fellow

Thomas Nagler • Prof. Dr. • Principal Investigator

David Rügamer • Prof. Dr. • Principal Investigator

Abstract

The recently developed Prior-Data Fitted Networks (PFNs) have shown very promising results for applications in low-data regimes. The TabPFN model, a special case of PFNs for tabular data, is able to achieve state-of-the-art performance on a variety of classification tasks while producing posterior predictive distributions in mere seconds by in-context learning without the need for learning parameters or hyperparameter tuning. This makes TabPFN a very attractive option for a wide range of domain applications. However, a major drawback of the method is its lack of interpretability. Therefore, we propose several adaptations of popular interpretability methods that we specifically design for TabPFN. By taking advantage of the unique properties of the model, our adaptations allow for more efficient computations than existing implementations. In particular, we show how in-context learning facilitates the estimation of Shapley values by avoiding approximate retraining and enables the use of Leave-One-Covariate-Out (LOCO) even when working with large-scale Transformers. In addition, we demonstrate how data valuation methods can be used to address scalability challenges of TabPFN.
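The key observation above is that an in-context learner has no trained parameters, so the "refit with one covariate removed" step of LOCO reduces to dropping a column from the context set. A minimal sketch of this idea follows; the `in_context_predict` and `loco_importance` helpers are hypothetical names, and a 1-nearest-neighbour rule stands in for TabPFN purely so the example is self-contained — it is not the paper's model or implementation.

```python
import numpy as np

def in_context_predict(X_ctx, y_ctx, X_test):
    """Stand-in for an in-context learner such as TabPFN: the prediction
    conditions directly on the context set, with no learned parameters.
    Here a 1-nearest-neighbour rule is used purely for illustration."""
    # Squared Euclidean distance from every test point to every context point.
    d = ((X_test[:, None, :] - X_ctx[None, :, :]) ** 2).sum(axis=-1)
    return y_ctx[d.argmin(axis=1)]

def loco_importance(X_ctx, y_ctx, X_test, y_test):
    """LOCO (Leave-One-Covariate-Out): for each covariate j, measure the
    increase in test error when j is removed. For an in-context learner,
    'refitting without j' is just slicing a column out of the context,
    so no retraining is required."""
    base_err = (in_context_predict(X_ctx, y_ctx, X_test) != y_test).mean()
    scores = []
    for j in range(X_ctx.shape[1]):
        keep = [k for k in range(X_ctx.shape[1]) if k != j]
        pred = in_context_predict(X_ctx[:, keep], y_ctx, X_test[:, keep])
        scores.append((pred != y_test).mean() - base_err)
    return np.array(scores)

# Synthetic data: feature 0 determines the label, features 1 and 2 are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
X_ctx, y_ctx, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

scores = loco_importance(X_ctx, y_ctx, X_te, y_te)
# Dropping the informative feature 0 should hurt most, so scores[0] dominates.
```

With TabPFN, the same loop would swap in the Transformer's forward pass as the predictor; because conditioning on a reduced context is a single inference call, LOCO stays cheap even for a large model.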

inproceedings


xAI 2024

2nd World Conference on Explainable Artificial Intelligence. Valletta, Malta, Jul 17-19, 2024.

Authors

D. Rundel • J. Kobialka • C. von Crailsheim • M. Feurer • T. Nagler • D. Rügamer

Links

DOI • GitHub

Research Area

 A1 | Statistical Foundations & Explainability

BibTeX Key: RKC+24
