15.09.2021

MCML Researchers With Two Papers at MICCAI 2021
24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). Strasbourg, France, 27.09.2021–01.10.2021
We are happy to announce that MCML researchers are represented with two papers at MICCAI 2021. Congratulations to our researchers!
Main Track (2 papers)
Towards Semantic Interpretation of Thoracic Disease and COVID-19 Diagnosis Models.
MICCAI 2021 - 24th International Conference on Medical Image Computing and Computer Assisted Intervention. Strasbourg, France, Sep 27-Oct 01, 2021. DOI GitHub
Abstract
Convolutional neural networks are showing promise in the automatic diagnosis of thoracic pathologies on chest X-rays. Their black-box nature has sparked many recent works to explain the prediction via input feature attribution methods (aka saliency methods). However, input feature attribution methods merely identify the importance of input regions for the prediction and lack semantic interpretation of model behavior. In this work, we first identify the semantics associated with internal units (feature maps) of the network. We proceed to investigate the following questions: Does a regression model that is only trained with COVID-19 severity scores implicitly learn visual patterns associated with thoracic pathologies? Does a network that is trained on weakly labeled data (e.g. healthy, unhealthy) implicitly learn pathologies? Moreover, we investigate the effect of pretraining and data imbalance on the interpretability of learned features. In addition to the analysis, we propose semantic attribution to semantically explain each prediction. We present our findings using publicly available chest pathology (CheXpert [5], NIH ChestX-ray8 [25]) and COVID-19 datasets (BrixIA [20], and COVID-19 chest X-ray segmentation dataset [4]).
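The input feature attribution methods the abstract refers to assign an importance score to each input region for a given prediction. As an illustration only (not the paper's method), a minimal occlusion-based sketch with a toy stand-in model shows the basic mechanism: mask one patch at a time and measure the drop in the model's output.

```python
import numpy as np

# Toy stand-in for a trained classifier: the score depends only on the
# mean intensity of the upper-left 4x4 patch (a hypothetical "pathology" region).
def model_score(image):
    return image[:4, :4].mean()

def occlusion_attribution(image, patch=4):
    """Attribute the prediction to image regions by zeroing out each
    patch in turn and measuring the drop in the model's output score."""
    base = model_score(image)
    h, w = image.shape
    attribution = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # occlude one patch
            attribution[i // patch, j // patch] = base - model_score(occluded)
    return attribution

rng = np.random.default_rng(0)
img = rng.random((8, 8))
attr = occlusion_attribution(img)
print(attr)  # only the upper-left patch receives non-zero attribution
```

Note how such a map says *where* the model looks but not *what* it has recognized there; the paper's semantic attribution addresses exactly this gap.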
MCML Authors

Dr. Ashkan Khakzar
Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features.
MICCAI 2021 - 24th International Conference on Medical Image Computing and Computer Assisted Intervention. Strasbourg, France, Sep 27-Oct 01, 2021. DOI GitHub
Abstract
Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays. In order to establish trust in the clinical routine, the networks’ prediction mechanism needs to be interpretable. One principal approach to interpretation is feature attribution. Feature attribution methods identify the importance of input features for the output prediction. Building on Information Bottleneck Attribution (IBA) method, for each prediction we identify the chest X-ray regions that have high mutual information with the network’s output. Original IBA identifies input regions that have sufficient predictive information. We propose Inverse IBA to identify all informative regions. Thus all predictive cues for pathologies are highlighted on the X-rays, a desirable property for chest X-ray diagnosis. Moreover, we propose Regression IBA for explaining regression models. Using Regression IBA we observe that a model trained on cumulative severity score labels implicitly learns the severity of different X-ray regions. Finally, we propose Multi-layer IBA to generate higher resolution and more detailed attribution/saliency maps. We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics, and human-agnostic feature importance metrics on NIH Chest X-ray8 and BrixIA datasets.
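The distinction between the original IBA (regions with *sufficient* predictive information) and the proposed Inverse IBA (*all* informative regions) can be made concrete with a toy sketch. The code below is illustrative only and not the paper's algorithm: a model with two redundant cues shows that each cue is individually sufficient for the prediction, yet neither is necessary, so a method that scores only one notion would miss informative evidence.

```python
import numpy as np

# Toy model: the prediction is the max of two redundant cues
# (features 0 and 1); features 2 and 3 are ignored.
def model(x):
    return max(x[0], x[1])

x = np.array([1.0, 1.0, 5.0, 5.0])
baseline = np.zeros_like(x)  # value 0 stands in for "no signal"

def keep_only(i):
    """Sufficiency: does feature i alone preserve the prediction?"""
    z = baseline.copy()
    z[i] = x[i]
    return model(z)

def remove_only(i):
    """Necessity: how much does removing feature i hurt the prediction?"""
    z = x.copy()
    z[i] = baseline[i]
    return model(x) - model(z)

sufficiency = [keep_only(i) for i in range(4)]
necessity = [remove_only(i) for i in range(4)]
print(sufficiency)  # [1.0, 1.0, 0.0, 0.0]: both cues are individually sufficient
print(necessity)    # [0.0, 0.0, 0.0, 0.0]: neither is necessary (redundancy)
```

In a chest X-ray setting this matters because a pathology can manifest in several regions at once; highlighting all predictive cues, as Inverse IBA aims to, is the clinically desirable behavior the abstract describes.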
MCML Authors

Dr. Ashkan Khakzar