18.08.2025


Mingyang Wang Receives Award at ACL 2025

Award for Impactful Contribution to Reliable and Inclusive NLP

MCML Junior Member Mingyang Wang, PhD student in the group of our PI Hinrich Schütze, has been honored with the SAC Highlights Award at ACL 2025 for the paper “Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models.”

This award recognizes impactful research advancing the understanding of factual consistency across languages in large language models, contributing to more reliable and inclusive NLP systems.

Congratulations from us!

Check out the full paper:

M. Wang, H. Adel, L. Lange, Y. Liu, E. Nie, J. Strötgen and H. Schütze.
Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models.
ACL 2025 - 63rd Annual Meeting of the Association for Computational Linguistics. Vienna, Austria, Jul 27 - Aug 01, 2025.
Abstract

Multilingual language models (MLMs) store factual knowledge across languages but often struggle to provide consistent responses to semantically equivalent prompts in different languages. While previous studies point out this cross-lingual inconsistency issue, the underlying causes remain unexplored. In this work, we use mechanistic interpretability methods to investigate cross-lingual inconsistencies in MLMs. We find that MLMs encode knowledge in a language-independent concept space through most layers, and only transition to language-specific spaces in the final layers. Failures during the language transition often result in incorrect predictions in the target language, even when the answers are correct in other languages. To mitigate this inconsistency issue, we propose a linear shortcut method that bypasses computations in the final layers, enhancing both prediction accuracy and cross-lingual consistency. Our findings shed light on the internal mechanisms of MLMs and provide a lightweight, effective strategy for producing more consistent factual outputs.
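For readers curious what a "linear shortcut" past the final layers could look like in practice, here is a minimal, hypothetical sketch in the spirit of the abstract rather than the paper's actual implementation: it decodes an intermediate hidden state directly through the model's final layer norm and unembedding matrix, bypassing the remaining transformer layers (a logit-lens-style early exit). The model choice (gpt2) and the shortcut layer index are illustrative assumptions; the paper studies multilingual language models.

# Illustrative sketch of a shortcut-style decoding that skips the last layers.
# Not the authors' code: model name and shortcut layer are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper works with multilingual LMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple of (num_layers + 1) tensors of shape [batch, seq, hidden]
hidden_states = outputs.hidden_states
shortcut_layer = len(hidden_states) - 4  # assumed: exit a few layers before the top

# Apply the final layer norm and the linear unembedding to the intermediate state
# of the last token, instead of running the remaining layers.
h = model.transformer.ln_f(hidden_states[shortcut_layer][:, -1, :])
shortcut_logits = model.lm_head(h)

full_token = outputs.logits[:, -1, :].argmax(dim=-1)
shortcut_token = shortcut_logits.argmax(dim=-1)
print("full-model prediction:", tokenizer.decode(full_token))
print("shortcut prediction:  ", tokenizer.decode(shortcut_token))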

#award #research #schuetze
