
Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features


Abstract

Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays. To establish trust in clinical routine, the networks' prediction mechanism needs to be interpretable. One principal approach to interpretation is feature attribution: feature attribution methods identify the importance of input features for the output prediction. Building on the Information Bottleneck Attribution (IBA) method, we identify, for each prediction, the chest X-ray regions that have high mutual information with the network's output. The original IBA identifies input regions that carry sufficient predictive information. We propose Inverse IBA to identify all informative regions, so that all predictive cues for pathologies are highlighted on the X-rays, a desirable property for chest X-ray diagnosis. Moreover, we propose Regression IBA for explaining regression models; using Regression IBA, we observe that a model trained on cumulative severity-score labels implicitly learns the severity of different X-ray regions. Finally, we propose Multi-layer IBA to generate higher-resolution, more detailed attribution/saliency maps. We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics and human-agnostic feature importance metrics on the NIH Chest X-ray8 and BrixIA datasets.
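The core IBA mechanism the abstract builds on inserts a noise bottleneck at an intermediate layer and optimizes a per-location mask that trades off the task loss against the information allowed through the bottleneck. Below is a minimal PyTorch sketch of that idea, assuming a standard 4D-feature image classifier in eval mode; all names (`run_iba`, `alpha`) and the optimizer/hyperparameter choices are illustrative, not the authors' reference implementation (see the linked GitHub for that).

```python
# Minimal sketch of Information-Bottleneck-style attribution (IBA).
# Hypothetical helper, not the paper's code: names and defaults are illustrative.
import torch
import torch.nn.functional as F

def run_iba(model, layer, x, target, steps=10, beta=10.0, lr=1.0):
    """Optimize a mask that keeps only predictive information at `layer`.

    The mask blends the layer's activations with Gaussian noise matched to
    their per-channel statistics; assumes 4D (N, C, H, W) features and
    `model` in eval mode.
    """
    feats = {}
    def grab(_, __, out):
        feats["r"] = out
    h = layer.register_forward_hook(grab)
    with torch.no_grad():
        model(x)
    h.remove()
    r = feats["r"].detach()
    mu = r.mean((0, 2, 3), keepdim=True)
    std = r.std((0, 2, 3), keepdim=True) + 1e-8

    alpha = torch.full_like(r, 5.0, requires_grad=True)  # mask logits
    opt = torch.optim.Adam([alpha], lr=lr)

    def inject(_, __, out):
        lam = torch.sigmoid(alpha)
        eps = mu + std * torch.randn_like(out)
        return lam * out + (1 - lam) * eps  # bottlenecked activations
    h = layer.register_forward_hook(inject)

    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        lam = torch.sigmoid(alpha)
        # Gaussian KL( p(z|r) || N(mu, std^2) ) upper-bounds the information
        # passed through; beta trades it off against the classification loss.
        z_mu = lam * (r - mu) / std
        z_std = 1 - lam
        info = 0.5 * (z_mu ** 2 + z_std ** 2 - 2 * torch.log(z_std + 1e-8) - 1)
        loss = F.cross_entropy(logits, target) + beta * info.mean()
        loss.backward()
        opt.step()
    h.remove()
    # Per-location information map, summed over channels -> saliency.
    return info.detach().sum(1)
```

For a torchvision ResNet, `layer` could be `model.layer4`; upsampling the returned per-location map to the input resolution and normalizing it yields a saliency map in the spirit of the paper's attributions.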

MICCAI 2021 (A Conference)

24th International Conference on Medical Image Computing and Computer Assisted Intervention. Strasbourg, France, Sep 27 - Oct 01, 2021.

Authors

A. Khakzar • Y. Zhang • W. Mansour • Y. Cai • Y. Li • Y. Zhang • S. T. Kim • N. Navab

Links

DOI • GitHub

Research Areas

 A1 | Statistical Foundations & Explainability

 C1 | Medicine

BibTeX Key: KZM+21
