16.10.2025
SIC: Making AI Image Classification Understandable
MCML Research Insight - With Tom Nuno Wolf, Emre Kavak, Fabian Bongratz, and Christian Wachinger
Deep learning models are becoming increasingly common in everyday life, even assisting clinicians in their diagnoses. However, their black-box nature prevents understanding of errors and decision-making, which is arguably as important as high accuracy in decision-critical tasks. Previous research has typically focused either on designing models that intuitively reason by example or on providing theoretically grounded but rather unintuitive pixel-level explanations.
Successful human-AI collaboration in medicine requires trust and clarity. To replace confusing AI tools that increase clinicians’ cognitive load, MCML Junior Members Tom Nuno Wolf, Emre Kavak, and Fabian Bongratz, together with MCML PI Christian Wachinger, created SIC for their collaborators at the TUM Klinikum Rechts der Isar. SIC is a fully transparent classifier built to make AI-assisted image classification both intuitive and provably reliable.
«Currently, clinicians are severely overworked. Hence, AI assistance tools must reduce the workload rather than introduce additional cognitive load.»
Tom Nuno Wolf et al.
MCML Junior Members
The Best of Both Worlds: Combining Intuition with Rigor
Imagine a radiologist identifying a condition. They instinctively compare the scan to thousands of past cases they’ve seen, a process known as case-based reasoning.
SIC leverages the same intuition: it integrates a similarity-based classification mechanism with B-cos neural networks, which provide faithful, pixel-level contribution maps. First, SIC learns a set of class-representative latent vectors that act as “textbook” examples (Support Samples). A test sample is then classified by computing the similarity scores between its latent vector and the latent vectors of the support samples and summing them per class.
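To make the mechanism concrete, here is a minimal sketch in PyTorch. It is an illustration under assumptions, not the paper’s actual implementation: the class name SimilarityClassifier, the generic encoder, the tensor of learned support_latents, and the choice of cosine similarity are all ours for the sake of the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityClassifier(nn.Module):
    """Sketch of similarity-based classification: a test sample's latent
    vector is compared against learned, class-representative support
    latents; per-class similarity scores are summed into class logits."""

    def __init__(self, encoder: nn.Module, n_classes: int,
                 n_support: int, latent_dim: int):
        super().__init__()
        self.encoder = encoder  # any image backbone (a B-cos network in SIC)
        # Learned "textbook" examples: n_support latent vectors per class.
        self.support_latents = nn.Parameter(
            torch.randn(n_classes, n_support, latent_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.normalize(self.encoder(x), dim=-1)       # (batch, latent_dim)
        s = F.normalize(self.support_latents, dim=-1)  # (class, support, dim)
        # Similarity of the test latent to every support latent.
        sims = torch.einsum("bd,cnd->bcn", z, s)       # (batch, class, support)
        # Summing the per-support evidence yields one score per class.
        return sims.sum(dim=-1)                        # (batch, n_classes)

Each entry of sims plays the role of the evidence that one support sample contributes to a class, which is exactly the kind of per-sample score the explanation in Figure 1 surfaces to the user.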
As shown in Figure 1, this provides multifaceted explanations that include the predicted class’s support samples and contribution maps, their numerical evidence, and the test sample’s contribution maps.
©Tom Nuno Wolf et al.
Figure 1: The multifaceted explanation provided by SIC. For a given test image, SIC provides a set of learned Support Samples for each class. The Contribution Maps are generated via the B-cos encoder, faithfully highlighting the pixels that contribute to the similarity score between the test sample and the latent vectors of the support samples. The Evidence score quantifies this similarity, showing the influence of each Support Sample on the final classification. This allows a user to interrogate the model’s decision by examining which Support Samples were most influential and what specific image features drove that influence.
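The faithfulness of these contribution maps comes from the B-cos transform of Böhle et al., which replaces ordinary linear layers with units whose response is down-weighted when input and weight are poorly aligned. Below is a simplified sketch of a single B-cos unit; SIC’s actual encoder uses the convolutional variant, and the layer name and the default B = 2 here are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BcosLinear(nn.Module):
    """Simplified B-cos unit: the ordinary linear response is scaled by
    |cos(x, w)|^(B-1), so the layer remains a dynamic linear map whose
    effective weights can be unrolled into a contribution map."""

    def __init__(self, in_dim: int, out_dim: int, b: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim))
        self.b = b

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = F.normalize(self.weight, dim=-1)   # unit-norm weight vectors
        lin = x @ w.t()                        # (batch, out_dim)
        cos = lin / x.norm(dim=-1, keepdim=True).clamp_min(1e-6)
        # Suppress outputs whose weights are poorly aligned with the input.
        return lin * cos.abs().pow(self.b - 1)

Because each layer’s output is, for a fixed input, a linear function of that input, the effective weights can be multiplied out through the whole network, yielding the pixel-level contribution maps shown in Figure 1.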
«In addition to reducing cognitive load, we believe that heuristic explanations should be avoided in the medical domain, as the outcome of false information is potentially life-threatening. We balanced these opposing interests in our work, which we are enthusiastic to evaluate in a medical user study next.»
Tom Nuno Wolf et al.
MCML Junior Members
Findings and Implications for Medical Image Analysis
It is often argued that interpretability comes at the cost of model performance. However, researchers in the domain have repeatedly provided evidence that this may be a misconception. The authors showed that SIC achieves comparable performance across a range of tasks, from fine-grained and multi-label to medical classification. Moreover, their theoretical evaluation shows that the explanations satisfy established axioms, which is confirmed empirically with the synthetic FunnyBirds framework. These results are what the authors were looking for in interpretable methods for deep learning: a transparent classifier providing theoretically grounded and easily accessible explanations for deployment in clinical settings.
Interested in Exploring Further?
Check out the code and the full paper, accepted at ICCV 2025, an A* conference and one of the most prestigious venues in the field of computer vision.
SIC: Similarity-Based Interpretable Image Classification with Neural Networks.
ICCV 2025 - IEEE/CVF International Conference on Computer Vision. Honolulu, Hawai’i, Oct 19-23, 2025. To be published. Preprint available. URL GitHub
Share Your Research!
Are you an MCML Junior Member and interested in showcasing your research on our blog?
We’re happy to feature your work—get in touch with us to present your paper.