15.09.2021

MCML Researchers With Two Papers at MICCAI 2021

24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). Strasbourg, France, 27.09.2021–01.10.2021

We are happy to announce that MCML researchers are represented with two papers at MICCAI 2021. Congrats to our researchers!

Main Track (2 papers)

A. Khakzar, S. Musatian, J. Buchberger, I. V. Quiroz, N. Pinger, S. Baselizadeh, S. T. Kim and N. Navab.
Towards Semantic Interpretation of Thoracic Disease and COVID-19 Diagnosis Models.
MICCAI 2021 - 24th International Conference on Medical Image Computing and Computer Assisted Intervention. Strasbourg, France, Sep 27-Oct 01, 2021.
Abstract

Convolutional neural networks are showing promise in the automatic diagnosis of thoracic pathologies on chest X-rays. Their black-box nature has sparked many recent works to explain the prediction via input feature attribution methods (aka saliency methods). However, input feature attribution methods merely identify the importance of input regions for the prediction and lack semantic interpretation of model behavior. In this work, we first identify the semantics associated with internal units (feature maps) of the network. We proceed to investigate the following questions: Does a regression model that is only trained with COVID-19 severity scores implicitly learn visual patterns associated with thoracic pathologies? Does a network that is trained on weakly labeled data (e.g. healthy, unhealthy) implicitly learn pathologies? Moreover, we investigate the effect of pretraining and data imbalance on the interpretability of learned features. In addition to the analysis, we propose semantic attribution to semantically explain each prediction. We present our findings using publicly available chest pathology datasets (CheXpert [5], NIH ChestX-ray8 [25]) and COVID-19 datasets (BrixIA [20] and the COVID-19 chest X-ray segmentation dataset [4]).
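The unit-semantics idea lends itself to a compact illustration. Below is a minimal sketch, not the authors' code, of one way to associate an internal unit (feature map) with a pathology concept: threshold the unit's activations and score their overlap (IoU) with a concept segmentation mask. The backbone, layer, input, threshold quantile, and mask here are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): associating an internal unit's
# activation map with a pathology concept via IoU. Model, layer, input,
# and the binary concept mask are illustrative stand-ins.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.densenet121(weights="DEFAULT").eval()

# Capture the feature maps of one intermediate layer with a forward hook.
feats = {}
model.features.denseblock3.register_forward_hook(
    lambda m, i, o: feats.update(maps=o.detach())
)

x = torch.randn(1, 3, 224, 224)        # stand-in for a chest X-ray
concept_mask = torch.zeros(224, 224)   # hypothetical pathology mask
concept_mask[80:140, 60:120] = 1.0

with torch.no_grad():
    model(x)

maps = feats["maps"][0]                # (units, H, W)
maps = F.interpolate(maps[None], size=(224, 224), mode="bilinear")[0]

# Binarize each unit at its own top-0.5% activation quantile, then score
# the overlap with the concept mask; high-IoU units "detect" the concept.
ious = []
for u in range(maps.shape[0]):
    active = maps[u] > torch.quantile(maps[u], 0.995)
    inter = (active & (concept_mask > 0)).sum().float()
    union = (active | (concept_mask > 0)).sum().float()
    ious.append((inter / union.clamp(min=1)).item())

best = max(range(len(ious)), key=ious.__getitem__)
print(f"unit {best} best matches the concept (IoU={ious[best]:.3f})")
```

In the paper's setting, units that score high IoU consistently across many annotated images would be flagged as detectors for that pathology; a single random input is used here only to keep the sketch self-contained.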


A. Khakzar, Y. Zhang, W. Mansour, Y. Cai, Y. Li, Y. Zhang, S. T. Kim and N. Navab.
Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features.
MICCAI 2021 - 24th International Conference on Medical Image Computing and Computer Assisted Intervention. Strasbourg, France, Sep 27-Oct 01, 2021.
Abstract

Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays. In order to establish trust in the clinical routine, the networks’ prediction mechanism needs to be interpretable. One principal approach to interpretation is feature attribution. Feature attribution methods identify the importance of input features for the output prediction. Building on the Information Bottleneck Attribution (IBA) method, we identify, for each prediction, the chest X-ray regions that have high mutual information with the network’s output. The original IBA identifies input regions that have sufficient predictive information. We propose Inverse IBA to identify all informative regions. Thus, all predictive cues for pathologies are highlighted on the X-rays, a desirable property for chest X-ray diagnosis. Moreover, we propose Regression IBA for explaining regression models. Using Regression IBA, we observe that a model trained on cumulative severity score labels implicitly learns the severity of different X-ray regions. Finally, we propose Multi-layer IBA to generate higher-resolution and more detailed attribution/saliency maps. We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics and human-agnostic feature importance metrics on the NIH ChestX-ray8 and BrixIA datasets.
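For readers unfamiliar with IBA, the following is a minimal sketch, not the authors' official implementation: a per-element mask λ on an intermediate feature map blends the true activation with Gaussian noise, and optimizing λ trades the classification loss against a closed-form KL term that bounds the information let through. The backbone, layer choice, β, step count, and single-sample noise statistics are all assumptions for illustration.

```python
# Minimal IBA-style sketch, not the official implementation. A mask
# lam in [0,1] per feature-map element blends the real activation with
# Gaussian noise; optimizing lam trades prediction quality against the
# information passed through the bottleneck.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

x = torch.randn(1, 3, 224, 224)   # stand-in for a chest X-ray
target = torch.tensor([0])        # hypothetical target class

# Record the clean activation once to estimate noise statistics (the
# method estimates these over many samples; one suffices for a sketch).
store = {}
rec = model.layer3.register_forward_hook(
    lambda m, i, o: store.update(out=o.detach())
)
with torch.no_grad():
    model(x)
rec.remove()
mu, std = store["out"].mean(), store["out"].std()

alpha = torch.zeros_like(store["out"], requires_grad=True)  # pre-sigmoid mask
opt = torch.optim.Adam([alpha], lr=0.1)
beta, eps = 10.0, 1e-8

def bottleneck(m, i, o):
    lam = torch.sigmoid(alpha)
    # Closed-form KL[N(lam*f+(1-lam)*mu, (1-lam)^2 std^2) || N(mu, std^2)]:
    # an elementwise upper bound on the information let through.
    f = (o - mu) / (std + eps)
    store["kl"] = (-torch.log(1 - lam + eps)
                   + 0.5 * ((1 - lam) ** 2 + (lam * f) ** 2) - 0.5)
    return lam * o + (1 - lam) * (mu + std * torch.randn_like(o))

h = model.layer3.register_forward_hook(bottleneck)
for _ in range(30):
    opt.zero_grad()
    loss = F.cross_entropy(model(x), target) + beta * store["kl"].mean()
    loss.backward()
    opt.step()
h.remove()

saliency = torch.sigmoid(alpha).detach().mean(1)[0]  # channel-averaged map
print(saliency.shape)  # spatial attribution map at layer3 resolution
```

Inverse IBA and Multi-layer IBA modify this basic recipe (finding all informative rather than only sufficient regions, and combining masks from several layers, respectively); the sketch covers only the original bottleneck.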


