
Probing LLMs for Multilingual Discourse Generalization Through a Unified Label Set

MCML Authors


Barbara Plank

Prof. Dr.

Principal Investigator


Michael Hedderich

Dr.

JRG Leader Human-Centered NLP

Abstract

Discourse understanding is essential for many NLP tasks, yet most existing work remains constrained by framework-dependent discourse representations. This work investigates whether large language models (LLMs) capture discourse knowledge that generalizes across languages and frameworks. We address this question along two dimensions: (1) developing a unified discourse relation label set to facilitate cross-lingual and cross-framework discourse analysis, and (2) probing LLMs to assess whether they encode generalizable discourse abstractions. Using multilingual discourse relation classification as a testbed, we examine a comprehensive set of 23 LLMs of varying sizes and multilingual capabilities. Our results show that LLMs, especially those with multilingual training corpora, can generalize discourse information across languages and frameworks. Further layer-wise analyses reveal that language generalization at the discourse level is most salient in the intermediate layers. Lastly, our error analysis provides an account of challenging relation classes.

inproceedings


ACL 2025

63rd Annual Meeting of the Association for Computational Linguistics. Vienna, Austria, Jul 27-Aug 01, 2025.
A* Conference

Authors

F. Eichin, Y. J. Liu, B. Plank, M. A. Hedderich

Links

URL

Research Area

B2 | Natural Language Processing

BibTeX Key: ELP+25
