
AUVIC: Adversarial Unlearning of Visual Concepts for Multi-Modal Large Language Models

MCML Authors

Abstract

Multimodal Large Language Models (MLLMs) achieve impressive performance once trained on massive datasets. Such datasets often contain sensitive or copyrighted content, raising significant data privacy concerns. Regulatory frameworks mandating the 'right to be forgotten' drive the need for machine unlearning, a technique that allows the removal of target data without resource-consuming retraining. However, while unlearning is well studied for text, visual concept unlearning in MLLMs remains underexplored. A primary challenge is precisely removing a target visual concept without disrupting model performance on related entities. To address this, we introduce AUVIC, a novel visual concept unlearning framework for MLLMs. AUVIC applies adversarial perturbations to enable precise forgetting, effectively isolating the target concept while avoiding unintended effects on similar entities. To evaluate our method, we construct VCUBench, the first benchmark designed to assess visual concept unlearning in group contexts. Experimental results demonstrate that AUVIC achieves state-of-the-art target forgetting rates while incurring minimal performance degradation on non-target concepts.
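The abstract describes the mechanism only at a high level. Below is a minimal, hypothetical PyTorch sketch of what adversarial-perturbation-based concept unlearning could look like; it is not AUVIC's actual algorithm. All names (pgd_perturb, unlearn_step, forget_batch, retain_batch) and the choice of a simple classifier-style output over concepts are illustrative assumptions, not details from the paper.

import torch
import torch.nn.functional as F

def pgd_perturb(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    # Hypothetical untargeted PGD: craft a perturbation that pushes the
    # model's prediction away from the target concept. Assumes `model`
    # returns concept logits; a real MLLM would need a task-specific loss.
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(images + delta), labels)
        loss.backward()
        # Ascend the loss so the perturbed image no longer evokes the concept.
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return delta.detach()

def unlearn_step(model, optimizer, forget_batch, retain_batch, lam=1.0):
    x_f, y_f = forget_batch   # images/labels of the concept to forget
    x_r, y_r = retain_batch   # related, non-target concepts to preserve
    delta = pgd_perturb(model, x_f, y_f)
    optimizer.zero_grad()
    # Forget: match the model's output on clean target images to its output
    # on adversarially perturbed ones, erasing the target-concept signal.
    forget_loss = F.kl_div(
        F.log_softmax(model(x_f), dim=-1),
        F.softmax(model(x_f + delta), dim=-1).detach(),
        reduction="batchmean",
    )
    # Retain: standard supervised loss on similar entities, which is one
    # plausible way to limit collateral damage to non-target concepts.
    retain_loss = F.cross_entropy(model(x_r), y_r)
    (forget_loss + lam * retain_loss).backward()
    optimizer.step()

The weighting lam between the forget and retain terms is a free design choice in this sketch; the paper's own objective and training schedule may differ.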



AAAI 2026

40th AAAI Conference on Artificial Intelligence. Singapore, Jan 20-27, 2026. To be published. Preprint available.
A* Conference

Authors

H. Chen • J. Li • Y. Zhang • J. Bi • Y. Xia • J. Gu • V. Tresp

Links

arXiv

Research Area

A3 | Computational Models

BibTeX Key: CLZ+26
