
Consistency and Completeness of Knowledge Acquired by Language Models


Abstract

This dissertation studies world and commonsense knowledge in pretrained language models (LMs), focusing on improving the consistency and completeness of that knowledge. It shows that LMs often produce self-contradictory answers and proposes a “symbolic executive” architecture that helps models maintain coherent beliefs over time. It further explores retrieval-based augmentation, reasoning during pretraining, and the integration of new entities to enhance factual coverage, moving toward language models with more consistent and evolving world knowledge. (Shortened).
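The abstract refers to LMs giving self-contradictory answers. As a loose illustration only, not code from the dissertation, the sketch below probes a masked LM with a cloze statement and its negation and flags cases where both receive the same top filler; the model name, prompts, and the helper `top_filler` are illustrative assumptions.

```python
# A minimal sketch (not the dissertation's code): detect self-contradictory
# predictions by comparing a masked LM's top filler for a statement and its
# negation. Model and prompts are illustrative assumptions.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

def top_filler(prompt: str) -> str:
    # Return the highest-scoring token the model proposes for [MASK].
    return unmasker(prompt)[0]["token_str"].strip()

pairs = [
    ("Birds can [MASK].", "Birds cannot [MASK]."),
    ("A robin is a [MASK].", "A robin is not a [MASK]."),
]

for positive, negated in pairs:
    pos, neg = top_filler(positive), top_filler(negated)
    # If the model fills the statement and its negation with the same word,
    # its answers are mutually inconsistent.
    status = "contradictory" if pos == neg else "consistent"
    print(f"{positive!r} -> {pos!r} | {negated!r} -> {neg!r} [{status}]")
```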

Dissertation [Kas25]

LMU München, Aug. 2025

Authors

N. Kassner

Links

DOI

Research Area

B2 | Natural Language Processing

BibTeXKey: Kas25
