
WikiBigEdit: Understanding the Limits of Lifelong Knowledge Editing in LLMs


Abstract

Keeping large language models factually up-to-date is crucial for deployment, yet costly retraining remains a challenge. Knowledge editing offers a promising alternative, but existing methods are tested only on small-scale or synthetic edit benchmarks. In this work, we aim to bridge research on lifelong knowledge editing with real-world edits at a practically relevant scale. We first introduce WikiBigEdit, a large-scale benchmark of real-world Wikidata edits, built to extend automatically over time for future-proof benchmarking. In its first instance, it includes over 500K question-answer pairs for knowledge editing alongside a comprehensive evaluation pipeline. Finally, we use WikiBigEdit to study the ability of existing knowledge editing techniques to incorporate large volumes of real-world facts, and contrast their capabilities with generic modification techniques such as retrieval augmentation and continual finetuning, to obtain a complete picture of the practical extent of current lifelong knowledge editing.
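
To make the setup concrete: the abstract describes question-answer pairs for knowledge editing plus an evaluation pipeline. The Python sketch below shows, purely as an illustration, what a generic lifelong-editing evaluation loop over such pairs could look like. All field names (rephrased_question, locality_question, ...) and the model.edit / model.answer interface are assumptions made for this example, not the actual WikiBigEdit schema or the paper's pipeline.

from dataclasses import dataclass

@dataclass
class EditExample:
    # Hypothetical fields; the real WikiBigEdit schema may differ.
    question: str            # probes the newly edited fact
    answer: str              # the updated, post-edit answer
    rephrased_question: str  # paraphrase, to test generalization of the edit
    locality_question: str   # unrelated fact that must remain unchanged
    locality_answer: str

def evaluate_lifelong(model, edits):
    """Apply edits sequentially and score edit success, generalization,
    and locality. `model` is assumed to expose edit(question, answer)
    and answer(question) -> str."""
    scores = {"edit": 0, "generalization": 0, "locality": 0}
    for ex in edits:
        model.edit(ex.question, ex.answer)  # one lifelong edit step
        scores["edit"] += model.answer(ex.question) == ex.answer
        scores["generalization"] += model.answer(ex.rephrased_question) == ex.answer
        scores["locality"] += model.answer(ex.locality_question) == ex.locality_answer
    n = max(len(edits), 1)
    return {metric: hits / n for metric, hits in scores.items()}

Because the loop only assumes an edit/answer interface, a knowledge editor, a retrieval-augmented system, or a continually finetuned model could be plugged in behind it, enabling the kind of side-by-side comparison the abstract describes.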



ICML 2025

42nd International Conference on Machine Learning. Vancouver, Canada, Jul 13-19, 2025.
A* Conference

Authors

L. Thede • K. Roth • M. Bethge • Z. Akata • T. Hartvigsen

Links

URL


Research Area

B1 | Computer Vision

BibTeXKey: TRB+25
