
ASCD: Attention-Steerable Contrastive Decoding for Reducing Hallucination in MLLM

MCML Authors

Abstract

Multimodal large language models (MLLMs) frequently hallucinate by over-committing to spurious visual cues. Prior remedies, Visual and Instruction Contrastive Decoding (VCD, ICD), mitigate this issue, yet the mechanism remains opaque. We first empirically show that their improvements systematically coincide with redistributions of cross-modal attention. Building on this insight, we propose Attention-Steerable Contrastive Decoding (ASCD), which directly steers the attention scores during decoding. ASCD combines (i) positive steering, which amplifies automatically mined text-centric heads (stable within a model and robust across domains), with (ii) negative steering, which dampens on-the-fly identified critical visual tokens. The method incurs negligible runtime/memory overhead and requires no additional training. Across five MLLM backbones and three decoding schemes, ASCD reduces hallucination on POPE, CHAIR, and MMHal-Bench by up to 38.2% while improving accuracy on standard VQA benchmarks, including MMMU, MM-VET, ScienceQA, TextVQA, and GQA. These results position attention steering as a simple, model-agnostic, and principled route to safer, more faithful multimodal generation.
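
The abstract describes steering as rescaling cross-modal attention during decoding: boosting attention to text tokens in the mined text-centric heads and dampening attention to flagged visual tokens. The sketch below is only a rough illustration of that idea under our own assumptions; the function name, the alpha/beta strengths, and the renormalization step are hypothetical and are not taken from the paper or its released code.

```python
import torch

def steer_attention(attn, text_idx, visual_idx, text_heads, alpha=0.5, beta=0.5):
    """Rescale one layer's post-softmax attention map (illustrative sketch).

    attn:        (num_heads, q_len, kv_len) attention weights
    text_idx:    key positions of text tokens (positive steering targets)
    visual_idx:  key positions of visual tokens flagged as over-attended
                 (negative steering targets)
    text_heads:  indices of the mined text-centric heads
    alpha, beta: steering strengths -- assumed hyper-parameters, not values
                 reported in the paper
    """
    scale = torch.ones_like(attn)

    # Positive steering: in the text-centric heads, amplify the columns that
    # correspond to text-token keys.
    pos = torch.ones_like(attn[0])
    pos[:, text_idx] = 1.0 + alpha
    scale[text_heads] = pos                       # broadcast over those heads

    # Negative steering: in every head, dampen the columns that correspond to
    # the critical visual tokens identified on the fly.
    scale[:, :, visual_idx] *= 1.0 - beta

    steered = attn * scale
    # Renormalize so each query row remains a probability distribution.
    return steered / steered.sum(dim=-1, keepdim=True).clamp_min(1e-8)


# Toy usage: 8 heads, 1 decoding query, 16 cached keys (positions 0-9 visual,
# 10-15 text); heads 2 and 5 stand in for the mined text-centric heads.
attn = torch.softmax(torch.randn(8, 1, 16), dim=-1)
out = steer_attention(attn, text_idx=list(range(10, 16)),
                      visual_idx=[3, 7], text_heads=[2, 5])
assert torch.allclose(out.sum(-1), torch.ones(8, 1))
```

Because the rescaling acts only on already-computed attention weights, a scheme of this shape adds no trainable parameters and essentially no runtime or memory overhead, consistent with the abstract's claim.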

AAAI 2026

40th AAAI Conference on Artificial Intelligence. Singapore, Jan 20-27, 2026.
A* Conference

Authors

Y. Wang • Aniri • J. Bi • S. Pirk • Y. Ma

Links

DOI

Research Area

A3 | Computational Models

BibTeX Key: WAB+26
