
GlotOCR Bench: OCR Models Still Struggle Beyond a Handful of Unicode Scripts

MCML Authors

Jana Diesner

Prof. Dr.

Collaborating PI

Hinrich Schütze

Prof. Dr.

Core PI

Abstract

Optical character recognition (OCR) has advanced rapidly with the rise of vision-language models, yet evaluation has remained concentrated on a small cluster of high- and mid-resource scripts. We introduce GlotOCR Bench, a comprehensive benchmark evaluating OCR generalization across 100+ Unicode scripts. Our benchmark comprises clean and degraded image variants rendered from real multilingual texts. Images are rendered using fonts from the Google Fonts repository, shaped with HarfBuzz and rasterized with FreeType, supporting both LTR and RTL scripts. Samples of rendered images were manually reviewed to verify correct rendering across all scripts. We evaluate a broad suite of open-weight and proprietary vision-language models and find that most perform well on fewer than ten scripts, and even the strongest frontier models fail to generalize beyond thirty scripts. Performance broadly tracks script-level pretraining coverage, suggesting that current OCR systems rely on language model pretraining as much as on visual recognition. Models confronted with unfamiliar scripts either produce random noise or hallucinate characters from similar scripts they already know. We release the benchmark and pipeline for reproducibility.
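OCR quality across scripts is typically quantified with character error rate (CER): the edit distance between reference and hypothesis text, normalized by reference length. The abstract does not state the benchmark's exact metric, so the following is a minimal stdlib sketch of a standard CER computation, not GlotOCR Bench's actual scoring code.

```python
# Minimal character error rate (CER) sketch for scoring OCR output.
# CER = edit_distance(reference, hypothesis) / len(reference).
# NOTE: this is a generic illustration; the metric and normalization
# used by GlotOCR Bench are assumptions, not taken from the paper.

def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance over Unicode code points (two-row DP)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i]
        for j, h in enumerate(hyp, start=1):
            cur.append(min(
                prev[j] + 1,             # deletion
                cur[j - 1] + 1,          # insertion
                prev[j - 1] + (r != h),  # substitution
            ))
        prev = cur
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate; can exceed 1.0 for very noisy hypotheses."""
    if not ref:
        return float(len(hyp) > 0)
    return edit_distance(ref, hyp) / len(ref)
```

Because the distance runs over code points, the same function applies unchanged to any Unicode script, LTR or RTL, which is what makes CER a natural choice for cross-script comparisons.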


Preprint

Apr. 2026

Authors

A. H. Kargaran • N. Nikeghbal • J. Diesner • F. Yvon • H. Schütze

Links

arXiv GitHub

Research Areas

 B2 | Natural Language Processing

 C4 | Computational Social Sciences

BibTeX Key: KND+26
