
When Meanings Meet: Investigating the Emergence and Quality of Shared Concept Spaces During Multilingual Language Model Training


Abstract

Training Large Language Models (LLMs) with high multilingual coverage is becoming increasingly important -- especially when monolingual resources are scarce. Recent studies have found that LLMs process multilingual inputs in shared concept spaces, which are thought to support generalization and cross-lingual transfer. However, these prior studies often do not use causal methods, lack deeper error analysis, or focus only on the final model, leaving open how these spaces emerge during training. We investigate the development of language-agnostic concept spaces during the pretraining of EuroLLM using activation patching, a causal interpretability method. We isolate cross-lingual concept representations, then inject them into a translation prompt to investigate how consistently translations can be altered, independent of the language. We find that shared concept spaces emerge early and continue to be refined throughout training, but that alignment with them is language-dependent. Furthermore, in contrast to prior work, our fine-grained manual analysis reveals that some apparent gains in translation quality reflect shifts in behavior -- such as selecting senses for polysemous words or translating rather than copying cross-lingual homographs -- rather than improved translation ability. Our findings offer new insight into the training dynamics of cross-lingual alignment and the conditions under which causal interpretability methods yield meaningful insights in multilingual contexts.
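The abstract describes activation patching only at a high level. The sketch below illustrates the general technique in a minimal form: read the final-token activation produced by a "donor" prompt at one layer, then overwrite the corresponding activation during a forward pass on a translation prompt and inspect how the prediction changes. It is not the paper's implementation; GPT-2 stands in for EuroLLM, and the layer index and prompts are arbitrary placeholders chosen for illustration.

```python
# Minimal activation-patching sketch (illustrative only, not the paper's setup).
# Assumptions: GPT-2 as a stand-in model, layer 6, last-token patching.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper studies EuroLLM pretraining checkpoints
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6  # hypothetical layer at which the concept representation is read and injected


def last_token_hidden(prompt: str, layer: int) -> torch.Tensor:
    """Return the residual-stream activation of the final token at `layer`."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[layer] is the output of transformer block `layer - 1`
    return out.hidden_states[layer][0, -1]


def patched_next_token(prompt: str, layer: int, vector: torch.Tensor) -> str:
    """Run `prompt` while overwriting the last-token activation at `layer`."""

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden[0, -1, :] = vector  # inject the donor concept representation in place

    handle = model.transformer.h[layer - 1].register_forward_hook(hook)
    try:
        ids = tok(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**ids).logits
    finally:
        handle.remove()
    return tok.decode(logits[0, -1].argmax().item())


# Donor prompt: its final-token activation carries the concept to inject.
donor_vec = last_token_hidden("The French word for 'dog' is", LAYER)
# Recipient translation prompt: does the injected concept steer the output?
print(patched_next_token("The German word for 'cat' is", LAYER, donor_vec))
```

Comparing the patched prediction against the unpatched one, across languages and checkpoints, is the kind of measurement the abstract refers to when it asks how consistently translations can be altered.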

inproceedings KMK+26


EACL 2026

19th Conference of the European Chapter of the Association for Computational Linguistics. Rabat, Morocco, Apr 24-29, 2026. To be published. Preprint available.
A Conference

Authors

F. Körner • M. Müller-Eberstein • A. Korhonen • B. Plank

Links

arXiv

Research Area

 B2 | Natural Language Processing

BibTeX Key: KMK+26
