
Linear Script Representations in Speech Foundation Models Enable Zero-Shot Transliteration

MCML Authors


Michael Hedderich
Dr., JRG Leader Human-Centered NLP


Barbara Plank
Prof. Dr., Principal Investigator

Abstract

Multilingual speech foundation models such as Whisper are trained on web-scale data in which each language is represented by a myriad of regional varieties. However, different regional varieties often employ different scripts to write the same language, which makes the script of speech recognition output non-deterministic as well. To mitigate this problem, we show that script is linearly encoded in the activation space of multilingual speech models and that modifying activations at inference time enables direct control over the output script. We find that adding such script vectors to activations at test time can induce a script change even for unconventional language-script pairings (e.g., Italian in Cyrillic and Japanese in Latin script). We apply this approach to induce post-hoc control over the script of speech recognition output and observe competitive performance across all Whisper model sizes.
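
The intervention described in the abstract amounts to adding a precomputed direction to hidden states during decoding, with no fine-tuning involved. The sketch below illustrates this general idea with a forward hook on a HuggingFace Whisper checkpoint; the checkpoint, steered layer, scaling factor, and the placeholder script vector are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch: steering Whisper's output script by adding a
# "script vector" to decoder hidden states at inference time.
# Assumptions (not from the paper): the checkpoint, LAYER, ALPHA,
# and the placeholder vector below.
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model.eval()

LAYER = 6    # which decoder layer to steer (assumption)
ALPHA = 4.0  # steering strength (assumption)

# In practice the script vector would be estimated from activations,
# e.g. mean hidden states over Cyrillic-script outputs minus mean
# hidden states over Latin-script outputs at the chosen layer. A random
# placeholder stands in here so the sketch is self-contained.
script_vector = torch.randn(model.config.d_model)

def add_script_vector(module, inputs, output):
    # Whisper decoder layers return a tuple whose first element is the
    # hidden states, shaped (batch, seq_len, d_model); broadcasting adds
    # the vector to every position. The rest of the tuple (e.g. the
    # key-value cache) is passed through unchanged.
    steered = output[0] + ALPHA * script_vector.to(output[0].dtype)
    return (steered,) + output[1:]

handle = model.model.decoder.layers[LAYER].register_forward_hook(add_script_vector)

# With real audio, the features would come from
# processor(waveform, sampling_rate=16_000, return_tensors="pt").input_features.
features = torch.zeros(1, model.config.num_mel_bins, 3000)  # silent dummy input
generated = model.generate(features)
print(processor.batch_decode(generated, skip_special_tokens=True))

handle.remove()  # removing the hook restores the unmodified model
```

Because the hook only perturbs the forward pass, removing it recovers the original model, which is what makes this kind of control post-hoc.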

Preprint

Jan. 2026

Authors

R. S.-E. Shim • K. Choi • K. Chang • M.-H. Hsu • F. Eichin • Z. Wu • A. Suhr • M. A. Hedderich • D. Harwath • D. R. Mortensen • B. Plank

Links

arXiv

Research Area

B2 | Natural Language Processing

BibTeXKey: SCC+26
