
Medical Multimodal Model Stealing Attacks via Adversarial Domain Alignment

Abstract

Medical multimodal large language models (MLLMs) are becoming an instrumental part of healthcare systems, assisting medical personnel with decision making and results analysis. Models for radiology report generation are able to interpret medical imagery, thus reducing the workload of radiologists. As medical data is scarce and protected by privacy regulations, medical MLLMs represent valuable intellectual property. However, these assets are potentially vulnerable to model stealing, where attackers aim to replicate their functionality via black-box access. So far, model stealing for the medical domain has focused on classification; however, existing attacks are not effective against MLLMs. In this paper, we introduce Adversarial Domain Alignment (ADA-STEAL), the first stealing attack against medical MLLMs. ADA-STEAL relies on natural images, which are public and widely available, as opposed to their medical counterparts. We show that data augmentation with adversarial noise is sufficient to overcome the data distribution gap between natural images and the domain-specific distribution of the victim MLLM. Experiments on the IU X-RAY and MIMIC-CXR radiology datasets demonstrate that Adversarial Domain Alignment enables attackers to steal the medical MLLM without any access to medical data.
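The core idea, an attacker perturbing public natural images with adversarial noise so that they better match the victim's medical domain before querying it, can be illustrated with a short sketch. The following is a minimal conceptual example, not the authors' ADA-STEAL implementation: the ResNet feature encoder, the PGD-style alignment loss, and the names `proxy_target` and `victim_mllm` are all assumptions made here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def adversarial_align(images, encoder, target_feats, steps=10, eps=8/255, alpha=2/255):
    """PGD-style loop: perturb natural images so that a public image
    encoder's features move toward a proxy medical-domain feature vector."""
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        feats = encoder(images + delta)
        # Minimize the distance to the target statistic (gradient descent on delta).
        loss = F.mse_loss(feats, target_feats.expand_as(feats))
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)          # keep the noise budget small
            delta.grad.zero_()
    return (images + delta).clamp(0, 1).detach()

# Public feature extractor standing in for the attacker's alignment model (an assumption).
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()
encoder.eval()
for p in encoder.parameters():
    p.requires_grad_(False)

natural = torch.rand(4, 3, 224, 224)   # stand-in batch of public natural images
proxy_target = torch.randn(512)        # assumed proxy for medical-domain features
aligned = adversarial_align(natural, encoder, proxy_target)

# reports = victim_mllm(aligned)       # hypothetical black-box query for radiology reports
# A student MLLM would then be trained on the (aligned, reports) pairs.
```

In this reading, the adversarial noise substitutes for access to real medical images: the attacker only needs black-box query access to the victim MLLM to collect report labels for the aligned images.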

inproceedings


AAAI 2025

39th AAAI Conference on Artificial Intelligence. Philadelphia, PA, USA, Feb 25-Mar 04, 2025.
A* Conference

Authors

Y. Shen • Z. Zhuang • K. Yuan • M.-I. Nicolae • N. Navab • N. Padoy • M. Fritz

Links

DOI

Research Area

C1 | Medicine

BibTeX Key: SZY+25
