Nora Kassner
* Former Member
This dissertation studies world and commonsense knowledge in pretrained language models, focusing on improving knowledge consistency and completeness. It shows that LMs often produce self-contradictory answers and proposes a "symbolic executive" architecture that helps models maintain coherent beliefs over time. It also explores retrieval-based augmentation, reasoning during pretraining, and the integration of new entities to broaden factual coverage, moving toward language models with more consistent and evolving world knowledge.
BibTeX key: Kas25