
A Study on Accessing Linguistic Information in Pre-Trained Language Models by Using Prompts

MCML Authors


Alexander Fraser

Prof. Dr.

Principal Investigator

Abstract

We study whether linguistic information in pre-trained multilingual language models can be accessed by human language: so far, there is no easy method to directly obtain linguistic information and gain insights into the linguistic principles encoded in such models. We use the technique of prompting and formulate linguistic tasks to test the LM's access to explicit grammatical principles, and we study how effective this method is at providing access to linguistic features. Our experiments on German, Icelandic and Spanish show that some linguistic properties can in fact be accessed through prompting, whereas others are harder to capture.
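
To illustrate the kind of prompting described above, here is a minimal sketch, not the paper's actual setup: it probes a multilingual masked LM with a cloze-style prompt for German grammatical gender using the Hugging Face `fill-mask` pipeline. The model choice (`xlm-roberta-base`), the prompt wording, and the candidate articles are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's setup): probe a multilingual
# masked LM for German grammatical gender with a cloze-style prompt.
from transformers import pipeline

# Assumed model choice; the paper's models and prompts may differ.
unmasker = pipeline("fill-mask", model="xlm-roberta-base")

# Cloze prompt: which definite article does the model prefer for "Haus" (neuter)?
prompt = "<mask> Haus ist sehr alt."
candidates = ["Der", "Die", "Das"]

# Restrict scoring to the candidate articles and print their probabilities.
for result in unmasker(prompt, targets=candidates):
    print(f"{result['token_str']}\tscore={result['score']:.3f}")

# A model that encodes grammatical gender should rank "Das" highest here;
# aggregating such prompts over many nouns gives an accuracy-style measure.
```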

inproceedings


EMNLP 2023

Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
A* Conference

Authors

M. Weller-Di Marco • K. Hämmerl • A. Fraser

Links

DOI

Research Area

 B2 | Natural Language Processing

BibTeX Key: DHF23
