
CUTE: Measuring LLMs’ Understanding of Their Tokens

MCML Authors


Alexander Fraser

Prof. Dr.

Principal Investigator

Abstract

Large Language Models (LLMs) show remarkable performance on a wide variety of tasks. Most LLMs split text into multi-character tokens and process them as atomic units without direct access to individual characters. This raises the question: To what extent can LLMs learn orthographic information? To answer this, we propose a new benchmark, CUTE, which features a collection of tasks designed to test the orthographic knowledge of LLMs. We evaluate popular LLMs on CUTE, finding that most of them seem to know the spelling of their tokens, yet fail to use this information effectively to manipulate text, calling into question how much of this knowledge is generalizable.
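The tokenization constraint described in the abstract can be illustrated with a minimal sketch (not the paper's code): a toy greedy longest-match tokenizer over a hand-made vocabulary, showing that a model receives multi-character tokens as indivisible units rather than individual characters.

```python
# Toy illustration: greedy longest-match tokenization over a small
# hand-made vocabulary. The model sees the resulting tokens as atomic
# units, with no direct access to the characters inside them.
def tokenize(text, vocab):
    """Split `text` into the longest matching vocabulary entries, left to right."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible match first.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Fall back to a single character if nothing matches.
            tokens.append(text[i])
            i += 1
    return tokens

vocab = {"straw", "berry", "spell", "ing"}
print(tokenize("strawberry", vocab))  # ['straw', 'berry']
print(tokenize("spelling", vocab))    # ['spell', 'ing']
```

Under this scheme, a question like "how many r's are in 'strawberry'?" requires the model to recall the spelling of the tokens 'straw' and 'berry' rather than count characters it can observe directly.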

inproceedings


EMNLP 2024

Conference on Empirical Methods in Natural Language Processing. Miami, FL, USA, Nov 12-16, 2024.
A* Conference

Authors

L. Edman • H. Schmid • A. Fraser

Links

DOI

Research Area

 B2 | Natural Language Processing

BibTeXKey: ESF24
