
Measuring and Aligning Abstraction in Vision-Language Models With Medical Taxonomies

MCML Authors

Abstract

Vision-Language Models (VLMs) show strong zero-shot performance for chest X-ray classification, but standard flat metrics fail to distinguish between clinically minor and severe errors. This work investigates how to quantify and mitigate abstraction errors by leveraging medical taxonomies. We benchmark several state-of-the-art VLMs using hierarchical metrics and introduce Catastrophic Abstraction Errors to capture cross-branch mistakes. Our results reveal substantial misalignment of VLMs with clinical taxonomies despite high flat performance. To address this, we propose risk-constrained thresholding and taxonomy-aware fine-tuning with radial embeddings, which reduce severe abstraction errors to below 2 per cent while maintaining competitive performance. These findings highlight the importance of hierarchical evaluation and representation-level alignment for safer and more clinically meaningful deployment of VLMs.
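
The sketch below illustrates the general idea of taxonomy-aware evaluation described in the abstract: predictions are scored against ancestor sets in a label hierarchy, and mistakes that cross top-level branches are counted as catastrophic. The toy taxonomy, label names, and metric choices are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed, not the authors' code): hierarchical scoring of
# single-label predictions against a toy chest X-ray taxonomy, where errors
# that cross top-level branches are flagged as catastrophic abstraction errors.

# Toy taxonomy: each label maps to its parent; top-level branches map to None.
PARENT = {
    "lung opacity": None,
    "pneumonia": "lung opacity",
    "atelectasis": "lung opacity",
    "cardiac": None,
    "cardiomegaly": "cardiac",
}

def ancestors(label):
    """Return the set containing a label and all of its ancestors."""
    chain = set()
    while label is not None:
        chain.add(label)
        label = PARENT[label]
    return chain

def root(label):
    """Top-level branch of the taxonomy that a label belongs to."""
    while PARENT[label] is not None:
        label = PARENT[label]
    return label

def hierarchical_f1(pred, true):
    """Set-based hierarchical F1 over ancestor sets (one example metric)."""
    p, t = ancestors(pred), ancestors(true)
    overlap = len(p & t)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(t)
    return 2 * precision * recall / (precision + recall)

def catastrophic_error_rate(preds, trues):
    """Fraction of predictions that land in a different top-level branch."""
    cross = sum(1 for p, t in zip(preds, trues) if p != t and root(p) != root(t))
    return cross / len(preds)

if __name__ == "__main__":
    preds = ["pneumonia", "cardiomegaly", "atelectasis"]
    trues = ["atelectasis", "cardiomegaly", "cardiomegaly"]
    print([round(hierarchical_f1(p, t), 2) for p, t in zip(preds, trues)])
    print(f"catastrophic abstraction error rate: {catastrophic_error_rate(preds, trues):.2f}")
```

In this toy example, confusing pneumonia with atelectasis stays within the lung-opacity branch and keeps partial hierarchical credit, whereas predicting atelectasis for cardiomegaly crosses branches and would count as a catastrophic abstraction error.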



Preprint

Jan. 2026

Authors

B. Schaper • M. Di Folco • B. Kainz • J. A. Schnabel • C. I. Bercea

Links

arXiv

Research Area

 C1 | Medicine

BibTeX Key: SDK+26
