
16.10.2025


SIC: Making AI Image Classification Understandable

MCML Research Insight - With Tom Nuno Wolf, Emre Kavak, Fabian Bongratz, and Christian Wachinger

Deep learning models are becoming increasingly common in everyday life, even assisting clinicians in diagnosis. However, their black-box nature prevents understanding of errors and decision-making, which is arguably as important as high accuracy in decision-critical tasks. Previous research has typically focused either on designing models that intuitively reason by example or on providing theoretically grounded but rather unintuitive pixel-level explanations.

Successful human-AI collaboration in medicine requires trust and clarity. To replace confusing AI tools that increase clinicians’ cognitive load, MCML Junior Members Tom Nuno Wolf, Emre Kavak, and Fabian Bongratz, together with MCML PI Christian Wachinger, created SIC for their collaborators at TUM Klinikum Rechts der Isar. SIC is a fully transparent classifier built to make AI-assisted image classification both intuitive and provably reliable.


«Currently, clinicians are severely overworked. Hence, AI-assisting tools must reduce the workload rather than introducing additional cognitive load.»


Tom Nuno Wolf et al.

MCML Junior Members

The Best of Both Worlds: Combining Intuition with Rigor

Imagine a radiologist identifying a condition. They instinctively compare the scan to thousands of past cases they’ve seen, a process known as case-based reasoning.

SIC leverages the same intuition, combining a similarity-based classification mechanism with B-cos neural networks, which provide faithful, pixel-level contribution maps. First, SIC learns a set of class-representative latent vectors that act as “textbook” examples (Support Samples). A test sample is then classified by computing and summing the similarity scores between its latent vector and the latent vectors of the support samples.
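To make this mechanism concrete, here is a minimal PyTorch sketch of a similarity-based classifier in this spirit. It is an illustration under assumptions, not the authors’ implementation: the class and parameter names are invented, and cosine similarity is one plausible choice of similarity score.

```python
import torch
import torch.nn as nn


class SimilarityClassifier(nn.Module):
    """Illustrative similarity-based classifier (hypothetical, not SIC itself)."""

    def __init__(self, encoder: nn.Module, num_classes: int,
                 supports_per_class: int, latent_dim: int):
        super().__init__()
        self.encoder = encoder  # e.g. a B-cos network mapping images to latent vectors
        # Learned, class-representative latent vectors ("Support Samples"):
        # shape (num_classes, supports_per_class, latent_dim)
        self.supports = nn.Parameter(
            torch.randn(num_classes, supports_per_class, latent_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)  # (batch, latent_dim)
        # Cosine similarity between each test latent and every support sample
        z = nn.functional.normalize(z, dim=-1)
        s = nn.functional.normalize(self.supports, dim=-1)
        sim = torch.einsum("bd,ckd->bck", z, s)  # (batch, classes, supports)
        # Class evidence: sum the similarity scores over that class's supports
        return sim.sum(dim=-1)  # (batch, classes)
```

The per-support similarities in `sim` correspond to the kind of numerical evidence shown in Figure 1: they reveal how much each Support Sample contributed to the predicted class.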

As shown in Figure 1, this provides multifaceted explanations that include the predicted class’s support samples and contribution maps, their numerical evidence, and the test sample’s contribution maps.


Figure 1: The multifaceted explanation provided by SIC. For a given test image, SIC provides a set of learned Support Samples for each class. The Contribution Maps are generated via the B-cos encoder, faithfully highlighting the pixels that contribute to the similarity score between the test sample and the latent vectors of the support samples. The Evidence score quantifies this similarity, showing the influence of each Support Sample on the final classification. This allows a user to interrogate the model’s decision by examining which Support Samples were most influential and what specific image features drove that influence.
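The faithfulness of the contribution maps rests on the B-cos encoder. A B-cos layer (Böhle et al.) scales each linear response by the alignment between input and weight, which makes the network an input-dependent linear map whose effective weights can be read off as pixel contributions. Below is an illustrative re-implementation of the basic B-cos transform, assuming B = 2; it is a sketch, not the authors’ code.

```python
import torch
import torch.nn as nn


class BcosLinear(nn.Module):
    """Illustrative B-cos unit: response scaled by input-weight alignment."""

    def __init__(self, in_features: int, out_features: int, b: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.b = b

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_hat = nn.functional.normalize(self.weight, dim=-1)  # unit-norm rows
        lin = x @ w_hat.t()  # plain linear response w_hat^T x
        # |cos(x, w_hat)| measures how well the input aligns with the weight
        cos = lin.abs() / x.norm(dim=-1, keepdim=True).clamp_min(1e-6)
        # B-cos transform: damp responses where input and weight are misaligned
        return lin * cos.pow(self.b - 1.0)
```

Because the layer acts as a linear map for any fixed input, the per-input contributions sum exactly to the output, which is the kind of property the faithfulness claims rest on. A quick check against the sketch above:

```python
layer = BcosLinear(16, 4)
x = torch.randn(1, 16)
out = layer(x)
# Effective (input-dependent) weights for this particular x
w_hat = nn.functional.normalize(layer.weight, dim=-1)
cos = (x @ w_hat.t()).abs() / x.norm(dim=-1, keepdim=True).clamp_min(1e-6)
dyn_w = w_hat * cos.pow(layer.b - 1.0).t()
# Contributions of each input dimension to each output sum to the output
assert torch.allclose((x * dyn_w).sum(dim=-1), out.squeeze(0), atol=1e-5)
```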


«In addition to reducing cognitive load, we believe that heuristical explanations should be abstained from in the medical domain, as the outcome of false information is potentially life-threatening. We balanced these opposed interests in our work, which we are enthusiastic to evaluate in a medical user study next.»


Tom Nuno Wolf et al.

MCML Junior Members

Findings and Implications for Medical Image Analysis

It is often argued that interpretability comes at the cost of model performance. However, researchers in the field have repeatedly provided evidence that this may be a misconception. The authors showed that SIC achieves comparable performance across a range of tasks, from fine-grained to multi-label to medical classification. Moreover, a theoretical evaluation shows that the explanations satisfy established axioms, a result confirmed empirically with the synthetic FunnyBirds framework. These results are what the authors were looking for in interpretable deep learning methods: a transparent classifier providing theoretically grounded and easily accessible explanations for deployment in clinical settings.


Interested in Exploring Further?

Check out the code and the full paper, accepted at ICCV 2025, an A*-ranked conference and one of the most prestigious venues in computer vision.

T. N. Wolf, E. Kavak, F. Bongratz and C. Wachinger.
SIC: Similarity-Based Interpretable Image Classification with Neural Networks.
ICCV 2025 - IEEE/CVF International Conference on Computer Vision. Honolulu, Hawai’i, Oct 19-23, 2025. To be published. Preprint available.
SIC code on GitHub

