Indirect Question Answering in English, German and Bavarian: A Challenging Task for High- And Low-Resource Languages Alike

Abstract

Indirectness is a common feature of daily communication, yet it is underexplored in NLP research for both low- and high-resource languages. Indirect Question Answering (IQA) aims at classifying the polarity of indirect answers. In this paper, we present two multilingual corpora for IQA of varying quality, both covering English, Standard German and Bavarian, a German dialect without a standard orthography: InQA+, a small high-quality evaluation dataset with hand-annotated labels, and GenIQA, a larger training dataset that contains artificial data generated by GPT-4o-mini. Based on several experimental variations with multilingual transformer models (mBERT, XLM-R and mDeBERTa), we find that IQA is a pragmatically hard task that comes with various challenges, and we suggest and employ recommendations to tackle them. Our results reveal low performance, even for English, and severe overfitting. We analyse various factors that influence these results, including label ambiguity, label set and dataset size. We find that IQA performance is poor in both high-resource (English, German) and low-resource (Bavarian) languages, and that a large amount of training data is beneficial. Further, GPT-4o-mini does not possess enough pragmatic understanding to generate high-quality IQA data in any of our tested languages.

LREC 2026

15th International Conference on Language Resources and Evaluation. Palma de Mallorca, Spain, May 11-16, 2026. To be published. Preprint available.

Authors

M. Winkler • V. Blaschke • B. Plank

Links

arXiv

Research Area

B2 | Natural Language Processing

BibTeX Key: WBP26