
GlotLID: Language Identification for Low-Resource Languages


Abstract

Several recent papers have published good solutions for language identification (LID) for about 300 high-resource and medium-resource languages. However, there is no LID available that (i) covers a wide range of low-resource languages, (ii) is rigorously evaluated and reliable, and (iii) is efficient and easy to use. Here, we publish GlotLID-M, an LID model that satisfies the desiderata of wide coverage, reliability and efficiency. It identifies 1665 languages, a large increase in coverage compared to prior work. In our experiments, GlotLID-M outperforms four baselines (CLD3, FT176, OpenLID and NLLB) when balancing F1 and false positive rate (FPR). We analyze the unique challenges that low-resource LID poses: incorrect corpus metadata, leakage from high-resource languages, difficulty separating closely related languages, handling of macrolanguages vs. varieties, and, in general, noisy data. We hope that integrating GlotLID-M into dataset creation pipelines will improve quality and enhance accessibility of NLP technology for low-resource languages and cultures.
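
The abstract proposes integrating GlotLID-M into dataset creation pipelines. A minimal sketch of what such an integration could look like is given below, assuming the released model is a fastText classifier hosted on the Hugging Face Hub (the repository id "cis-lmu/glotlid", the weights filename "model.bin", and the "__label__<iso639-3>_<script>" label convention are assumptions of this sketch, not details stated on this page).

    # Sketch: filtering a text collection with GlotLID-M in a dataset pipeline.
    # Assumptions: fastText-format model downloadable from the Hugging Face Hub
    # under "cis-lmu/glotlid" as "model.bin"; labels like "__label__deu_Latn".
    import fasttext
    from huggingface_hub import hf_hub_download

    model_path = hf_hub_download(repo_id="cis-lmu/glotlid", filename="model.bin")
    model = fasttext.load_model(model_path)

    def keep_if_language(text, target_label, threshold=0.5):
        """Return True if the top prediction matches target_label with enough confidence."""
        # fastText's predict() rejects newlines, so normalize them first.
        labels, probs = model.predict(text.replace("\n", " "), k=1)
        return labels[0] == target_label and probs[0] >= threshold

    # Example: keep only sentences identified as Standard German in Latin script.
    sentences = ["Das ist ein Beispielsatz.", "This is an English sentence."]
    german = [s for s in sentences if keep_if_language(s, "__label__deu_Latn")]
    print(german)

Thresholding the classifier's confidence, as sketched here, is one way to trade recall for a lower false positive rate when curating low-resource corpora.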

inproceedings

Findings @EMNLP 2023

Findings of the Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
A* Conference

Authors

A. H. Kargaran • A. Imani • F. Yvon • H. Schütze

Links

DOI • GitHub

Research Area

B2 | Natural Language Processing

BibTeX Key: KIY+23
