EMMA-500: Enhancing Massively Multilingual Adaptation of Large Language Models

MCML Authors

Hinrich Schütze

Prof. Dr.

Principal Investigator

Abstract

In this work, we introduce EMMA-500, a large-scale multilingual language model continually pre-trained on texts across 546 languages and designed for enhanced multilingual performance, with a focus on improving language coverage for low-resource languages. To facilitate continual pre-training, we compile the MaLA corpus, a comprehensive multilingual dataset enriched with curated datasets across diverse domains. Leveraging this corpus, we conduct extensive continual pre-training of the Llama 2 7B model, resulting in EMMA-500, which demonstrates robust performance across a wide collection of benchmarks, including a comprehensive set of multilingual tasks and PolyWrite, an open-ended generation benchmark developed in this study. Our results highlight the effectiveness of continual pre-training in expanding large language models' language capacity, particularly for underrepresented languages, demonstrating significant gains in cross-lingual transfer, task generalization, and language adaptability.
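
For readers unfamiliar with the setup, the sketch below illustrates what continual pre-training of a Llama 2 7B base model on a multilingual text corpus can look like using the Hugging Face transformers Trainer. This is a minimal sketch under assumed defaults: the corpus path, sequence length, and hyperparameters are illustrative placeholders, not the actual EMMA-500 or MaLA corpus configuration.

```python
# Minimal sketch of continual (causal LM) pre-training on a multilingual corpus.
# Assumes a Hugging Face setup; paths and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"    # base model continued-trained in the paper
CORPUS_PATH = "path/to/multilingual.jsonl" # hypothetical corpus file with a "text" field

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Load raw multilingual text and tokenize it for causal LM training.
raw = load_dataset("json", data_files=CORPUS_PATH, split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

args = TrainingArguments(
    output_dir="emma500-continual-sketch",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=64,   # illustrative effective batch size
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=100,
    save_steps=1000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The key design point of continual pre-training, as opposed to training from scratch, is that the optimizer resumes from the pretrained Llama 2 weights and only the data distribution changes, here toward low-resource languages.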

misc


Preprint

Sep. 2024

Authors

S. Ji • Z. Li • I. Paul • J. Paavola • P. Lin • P. Chen • D. O'Brien • H. Luo • H. Schütze • J. Tiedemann • B. Haddow

Research Area

 B2 | Natural Language Processing

BibTeX Key: JLP+24
