
Static Embeddings as Efficient Knowledge Bases?

MCML Authors


Hinrich Schütze

Prof. Dr.

Principal Investigator

Abstract

Recent research investigates the factual knowledge stored in large pretrained language models (PLMs). Instead of structured knowledge base (KB) queries, masked sentences such as 'Paris is the capital of [MASK]' are used as probes. The good performance on this analysis task has been interpreted as a sign that PLMs can serve as repositories of factual knowledge. In experiments across ten linguistically diverse languages, we study the knowledge contained in static embeddings. We show that, when the output space is restricted to a candidate set, simple nearest neighbor matching using static embeddings outperforms PLMs. For example, static embeddings perform 1.6 percentage points better than BERT while using just 0.3% of the energy for training. One important factor in their good comparative performance is that static embeddings are standardly learned for a large vocabulary. In contrast, BERT exploits its more sophisticated, but expensive, ability to compose meaningful representations from a much smaller subword vocabulary.
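The comparison method described in the abstract is simple: with the answer space restricted to a candidate set, a probe is answered by returning the candidate whose static embedding is most similar to the query embedding. The sketch below illustrates such nearest neighbor matching with cosine similarity; the function name, embedding table, and candidate set are toy assumptions for illustration, not the paper's actual experimental pipeline.

import numpy as np

def nearest_candidate(query_vec, candidates, embeddings):
    """Return the candidate whose static embedding has the highest cosine similarity to query_vec."""
    best, best_sim = None, -np.inf
    for cand in candidates:
        vec = embeddings.get(cand)
        if vec is None:
            continue  # skip candidates missing from the embedding vocabulary
        sim = float(np.dot(query_vec, vec) / (np.linalg.norm(query_vec) * np.linalg.norm(vec)))
        if sim > best_sim:
            best, best_sim = cand, sim
    return best

# Hypothetical toy embedding table and candidate set for the probe
# "Paris is the capital of [MASK]" (vectors are made up for illustration).
embeddings = {
    "Paris":   np.array([0.9, 0.1, 0.0]),
    "France":  np.array([0.8, 0.2, 0.1]),
    "Germany": np.array([0.1, 0.9, 0.2]),
    "Japan":   np.array([0.0, 0.1, 0.9]),
}
candidates = ["France", "Germany", "Japan"]
print(nearest_candidate(embeddings["Paris"], candidates, embeddings))  # prints "France"

Because every candidate is scored against the same query vector, this amounts to a single similarity ranking over precomputed vectors, which is far cheaper than a forward pass through a masked language model.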

inproceedings


NAACL 2021

Annual Conference of the North American Chapter of the Association for Computational Linguistics. Virtual, Jun 06-11, 2021.
A Conference

Authors

P. Dufter • N. Kassner • H. Schütze

Links

DOI

Research Area

 B2 | Natural Language Processing

BibTeX Key: DKS21
