
GradSim: Gradient-Based Language Grouping for Effective Multilingual Training

MCML Authors


Prof. Dr. Hinrich Schütze · Principal Investigator

Abstract

Most languages of the world pose low-resource challenges to natural language processing models. With multilingual training, knowledge can be shared among languages. However, not all languages positively influence each other, and it is an open research question how to select the most suitable set of languages for multilingual training and how to avoid negative interference among languages whose characteristics or data distributions are not compatible. In this paper, we propose GradSim, a language grouping method based on gradient similarity. Our experiments on three diverse multilingual benchmark datasets show that it leads to larger performance gains than other similarity measures and correlates better with cross-lingual model performance. As a result, we set the new state of the art on AfriSenti, a benchmark dataset for sentiment analysis on low-resource African languages. In our extensive analysis, we further reveal that, besides linguistic features, the topics of the datasets play an important role for language grouping, and that lower layers of transformer models encode language-specific features while higher layers capture task-specific information.
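
To make the core idea concrete, below is a minimal illustrative sketch of gradient-based language grouping: per-language gradients of a shared multilingual model are compared via cosine similarity, and languages are clustered so that those with similar gradients train together. The abstract only names gradient similarity; the gradient extraction, the average-linkage clustering, and all function names (gradient_similarity_matrix, group_languages) here are illustrative assumptions, not the paper's exact procedure.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def gradient_similarity_matrix(grads: dict[str, np.ndarray]) -> np.ndarray:
    """Pairwise cosine similarity between per-language gradient vectors."""
    langs = list(grads)
    G = np.stack([grads[l] / np.linalg.norm(grads[l]) for l in langs])
    return G @ G.T  # entry (i, j) = cos(grad_i, grad_j)

def group_languages(grads: dict[str, np.ndarray], n_groups: int) -> dict[str, int]:
    """Cluster languages by gradient similarity (illustrative: average linkage)."""
    langs = list(grads)
    sim = gradient_similarity_matrix(grads)
    dist = 1.0 - sim  # turn similarity into a distance
    # condensed upper-triangular distance vector, as expected by scipy's linkage
    iu = np.triu_indices(len(langs), k=1)
    Z = linkage(dist[iu], method="average")
    labels = fcluster(Z, t=n_groups, criterion="maxclust")
    return dict(zip(langs, labels))

# Toy usage: in practice, each gradient would come from backpropagating the
# task loss on one language's data through the shared model; random here.
rng = np.random.default_rng(0)
grads = {lang: rng.normal(size=128) for lang in ["ha", "yo", "sw", "am"]}
print(group_languages(grads, n_groups=2))

The resulting groups would each be trained as one multilingual model, the intent being that languages whose gradients point in similar directions reinforce rather than interfere with each other.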

inproceedings


EMNLP 2023

Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
A* Conference

Authors

M. Wang • H. Adel • L. Lange • J. Strötgen • H. Schütze

Links

DOI

Research Area

B2 | Natural Language Processing

BibTeXKey: WAL+23
