
Don't Forget Cheap Training Signals Before Building Unsupervised Bilingual Word Embeddings

MCML Authors

Dr. Masoud Jalili Sabet

Prof. Dr. Alexander Fraser • Principal Investigator

Prof. Dr. Hinrich Schütze • Principal Investigator

Abstract

Bilingual Word Embeddings (BWEs) are one of the cornerstones of cross-lingual transfer of NLP models. They can be built using only monolingual corpora, without supervision, which has led to numerous works focusing on unsupervised BWEs. However, most current approaches to building unsupervised BWEs do not compare their results with methods based on easy-to-access cross-lingual signals. In this paper, we argue that such signals should always be considered when developing unsupervised BWE methods. The two approaches we find most effective are: 1) using identical words as seed lexicons (which unsupervised approaches incorrectly assume are unavailable for orthographically distinct language pairs), and 2) combining such lexicons with pairs extracted by matching romanized versions of words under an edit-distance threshold. We experiment on thirteen non-Latin languages (and English) and show that such cheap signals work well and that they outperform more complex unsupervised methods on distant language pairs such as Chinese, Japanese, Kannada, Tamil, and Thai. In addition, they are even competitive with the use of high-quality lexicons in supervised approaches. Our results show that these training signals should not be neglected when building BWEs, even for distant languages.
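The two cheap signals lend themselves to a compact illustration. The sketch below is hypothetical, not the authors' implementation: it builds a seed lexicon from two monolingual vocabularies by taking identical words and then adding pairs whose romanized forms fall within an edit-distance threshold. The `unidecode` romanizer, the threshold of 2, and the brute-force candidate search are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the two "cheap signals" for seed lexicons:
# (1) identical words shared by both vocabularies, and
# (2) word pairs whose romanized forms are within an edit-distance threshold.
# Assumptions for illustration: `unidecode` as the romanizer, max_dist=2,
# and a naive O(|V_src| * |V_tgt|) search; none of these come from the paper.
from unidecode import unidecode


def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def cheap_seed_lexicon(src_vocab, tgt_vocab, max_dist=2):
    """Collect (src, tgt) seed pairs from identical and near-identical words."""
    # Signal 1: identical surface forms in both vocabularies.
    pairs = {(w, w) for w in set(src_vocab) & set(tgt_vocab)}
    # Signal 2: match romanized forms under an edit-distance threshold.
    romanized_tgt = [(t, unidecode(t)) for t in tgt_vocab]
    for s in src_vocab:
        rs = unidecode(s)
        for t, rt in romanized_tgt:
            if levenshtein(rs, rt) <= max_dist:
                pairs.add((s, t))
    return sorted(pairs)


# Toy usage: Cyrillic loanwords romanize to near-identical English forms.
print(cheap_seed_lexicon(["data", "internet", "berlin"],
                         ["данные", "интернет", "берлин"]))
# -> [('berlin', 'берлин'), ('internet', 'интернет')]
```

A real pipeline would substitute the romanization tool and threshold used in the paper and would restrict candidate pairs (e.g., by frequency) to keep the quadratic matching tractable on full vocabularies.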

inproceedings

BUCC @LREC 2022

15th Workshop on Building and Using Comparable Corpora at the 13th International Conference on Language Resources and Evaluation. Marseille, France, Jun 21-23, 2022.

Authors

S. Severini • V. Hangya • M. J. Sabet • A. Fraser • H. Schütze

Links

URL

Research Area

 B2 | Natural Language Processing

BibTeX Key: SHS+22
