
Lost in Multilinguality: Dissecting Cross-Lingual Factual Inconsistency in Transformer Language Models

Abstract

Multilingual language models (MLMs) store factual knowledge across languages but often struggle to provide consistent responses to semantically equivalent prompts in different languages. While previous studies point out this cross-lingual inconsistency issue, the underlying causes remain unexplored. In this work, we use mechanistic interpretability methods to investigate cross-lingual inconsistencies in MLMs. We find that MLMs encode knowledge in a language-independent concept space through most layers, and only transition to language-specific spaces in the final layers. Failures during the language transition often result in incorrect predictions in the target language, even when the answers are correct in other languages. To mitigate this inconsistency issue, we propose a linear shortcut method that bypasses computations in the final layers, enhancing both prediction accuracy and cross-lingual consistency. Our findings shed light on the internal mechanisms of MLMs and provide a lightweight, effective strategy for producing more consistent factual outputs.
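The "linear shortcut" mentioned in the abstract can be illustrated with a small, hedged sketch: decoding an intermediate hidden state directly with the model's output head instead of running the final transformer layers, in the spirit of a logit-lens-style readout. The model name, the layer offset, and the prompt below are illustrative assumptions, not the paper's actual configuration or implementation.

```python
# Hedged sketch of a "shortcut" decode: read out an intermediate hidden state
# with the model's (linear) unembedding, bypassing the last few layers.
# Model, layer choice, and prompt are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper studies multilingual LMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The capital of France is"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states[k] is the residual stream after block k (index 0 = embeddings).
shortcut_layer = len(out.hidden_states) - 5   # assumed: skip the last few blocks
h = out.hidden_states[shortcut_layer][:, -1]  # last-token state at that layer

# Apply the final norm and the linear unembedding to the intermediate state.
h = model.transformer.ln_f(h)
logits = model.lm_head(h)

print("shortcut prediction: ", tok.decode(logits.argmax(-1)))
print("full-model prediction:", tok.decode(out.logits[:, -1].argmax(-1)))
```

Comparing the two printed predictions gives a rough sense of how much the final layers change the answer; the paper's method presumably goes further than this plain readout, so treat the snippet only as a conceptual starting point.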

ACL 2025

63rd Annual Meeting of the Association for Computational Linguistics. Vienna, Austria, Jul 27-Aug 01, 2025.
A* Conference

Authors

M. Wang • H. Adel • L. Lange • Y. Liu • E. Nie • J. Strötgen • H. Schütze

Links

URL

Research Area

B2 | Natural Language Processing

BibTeX Key: WAL+25
