Causal Effect Identification in LiNGAM Models With Latent Confounders

MCML Authors

Mathias Drton

Prof. Dr.

Principal Investigator

Abstract

We study the generic identifiability of causal effects in linear non-Gaussian acyclic models (LiNGAM) with latent variables. We consider the problem in two main settings: when the causal graph is known a priori, and when it is unknown. In both settings, we provide a complete graphical characterization of the identifiable direct or total causal effects among observed variables. Moreover, we propose efficient algorithms to verify the graphical conditions. Finally, we propose an adaptation of the reconstruction independent component analysis (RICA) algorithm that estimates the causal effects from observed data given the causal graph. Experimental results show the effectiveness of the proposed method in estimating the causal effects.
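
For illustration, here is a minimal Python sketch, not the paper's algorithm, of the setting the abstract describes: a LiNGAM-type linear structural equation model with a latent confounder, showing why a naive regression fails to recover the causal effect. All variable names and coefficients are illustrative assumptions.

import numpy as np

# Minimal sketch, not the paper's method: a LiNGAM-type linear SEM with a
# latent confounder L of X and Y. Coefficients are illustrative assumptions.
rng = np.random.default_rng(0)
n = 100_000

# LiNGAM requires non-Gaussian exogenous noise; uniform noise is one choice.
eps_l = rng.uniform(-1.0, 1.0, n)
eps_x = rng.uniform(-1.0, 1.0, n)
eps_y = rng.uniform(-1.0, 1.0, n)

a, b, c = 0.8, 1.5, 2.0    # L -> X, L -> Y, X -> Y (c is the true causal effect)
L = eps_l                  # latent confounder: generated here, but treated as unobserved
X = a * L + eps_x
Y = c * X + b * L + eps_y

# Regressing Y on X while ignoring L overstates the causal effect,
# which is why identifiability results under latent confounding are needed.
C = np.cov(X, Y)
naive = C[0, 1] / C[0, 0]
print(f"naive regression coefficient: {naive:.3f} (true effect c = {c})")

With these coefficients the naive estimate comes out near 2.73 rather than the true 2.0, since the confounding paths through L add a bias of a*b*Var(L)/Var(X) ≈ 0.73.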

inproceedings


ICML 2024

41st International Conference on Machine Learning. Vienna, Austria, Jul 21-27, 2024.
A* Conference

Authors

D. Tramontano • Y. Kivva • S. Salehkaleybar • M. Drton • N. Kiyavash

Links

URL

Research Area

A1 | Statistical Foundations & Explainability

BibTeXKey: TKS+24
