
Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-Training


Abstract

Multimodal pre-training has shown its potential in the medical domain, where medical visual representations are learned from paired medical reports. However, many pre-training tasks require extra annotations from clinicians, and most of them fail to explicitly guide the model toward the desired features of different pathologies. In this paper, we utilize Visual Question Answering (VQA) for multimodal pre-training to guide the framework toward targeted pathological features. We leverage descriptions in medical reports to design multi-granular question-answer pairs associated with different diseases, which help the framework pre-train without requiring extra annotations from experts. We also propose a novel pre-training framework with a quasi-textual feature transformer, a module that transforms visual features into a quasi-textual space closer to the textual domain via a contrastive learning strategy. This narrows the vision-language gap and facilitates modality alignment. Our framework is applied to four downstream tasks across five datasets: report generation, classification, segmentation, and detection. Extensive experiments demonstrate the superiority of our framework compared with other state-of-the-art methods.
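
Below is a minimal, hypothetical sketch of how a quasi-textual feature transformer and the contrastive alignment described above could look in PyTorch. The module name, query-based design, dimensions, and InfoNCE-style loss are illustrative assumptions, not the authors' implementation; see the linked GitHub repository for the actual code.

```python
# Illustrative sketch (not the authors' code): a "quasi-textual feature transformer"
# mapping visual features toward the text embedding space, trained with an
# InfoNCE-style contrastive loss. All names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuasiTextualTransformer(nn.Module):
    """Projects visual patch features into a quasi-textual space via learnable queries."""
    def __init__(self, vis_dim=768, txt_dim=768, num_queries=32, num_layers=2):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, txt_dim))
        layer = nn.TransformerDecoderLayer(d_model=txt_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.vis_proj = nn.Linear(vis_dim, txt_dim)

    def forward(self, vis_feats):                      # vis_feats: (B, N_patches, vis_dim)
        memory = self.vis_proj(vis_feats)              # map to the text dimension
        queries = self.queries.unsqueeze(0).expand(vis_feats.size(0), -1, -1)
        quasi_txt = self.decoder(queries, memory)      # (B, num_queries, txt_dim)
        return quasi_txt.mean(dim=1)                   # pooled quasi-textual embedding

def contrastive_alignment_loss(quasi_txt, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss pulling paired image/report embeddings together."""
    q = F.normalize(quasi_txt, dim=-1)
    t = F.normalize(txt_emb, dim=-1)
    logits = q @ t.t() / temperature                   # (B, B) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Example usage with random tensors standing in for encoder outputs.
vis_feats = torch.randn(4, 196, 768)   # e.g. ViT patch features for 4 images
txt_emb = torch.randn(4, 768)          # e.g. pooled report/answer embeddings
model = QuasiTextualTransformer()
loss = contrastive_alignment_loss(model(vis_feats), txt_emb)
```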

Type: inproceedings


MICCAI 2024

27th International Conference on Medical Image Computing and Computer Assisted Intervention. Marrakesh, Morocco, Oct 06-10, 2024.
A Conference

Authors

T. Su • J. Li • X. Zhang • H. Jin • H. Chen • Q. Wang • F. Lv • B. Zhao • Y. Hu

Links

DOI • GitHub

Research Area

 C1 | Medicine

BibTeX key: SLZ+24
