
Can Calibration of Positional Encodings Enhance Long Context Utilization?

MCML Authors

Abstract

Large language models suffer from positional biases such as the 'Lost in the Middle' (LiM) phenomenon and recency bias, which reduce the effective utilization of long contexts. In this work, we investigate the role of positional encodings in these biases. Our empirical study confirms that they persist in modern large language models. Drawing on these findings, we introduce Caliope, a training-free framework for calibrating positional encodings at inference time. Our calibrators yield substantial improvements on needle-in-a-haystack and cross-chunk reasoning benchmarks, and offer a practical, lightweight method for improving long-context utilization.
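To make the idea of inference-time calibration concrete, here is a minimal, purely illustrative sketch of rescaling rotary positional-encoding (RoPE) angles at inference time. This is a generic technique sometimes used to adjust effective positions without retraining; it is an assumption for illustration only and is not the Caliope method, whose details are given in the paper.

```python
import math


def rope_angles(position, dim, base=10000.0, scale=1.0):
    """Rotary-embedding rotation angles for a single token position.

    `scale` is a hypothetical calibration factor applied at inference
    time: values below 1 compress effective positions, so distant
    tokens receive the angles of nearer ones (illustrative only).
    """
    return [
        (position * scale) / (base ** (2 * i / dim))
        for i in range(dim // 2)
    ]


# Rescaling position 4096 by 0.5 reproduces the angles of position 2048,
# i.e. the model "sees" the token as if it were closer.
assert rope_angles(4096, 64, scale=0.5) == rope_angles(2048, 64, scale=1.0)
```

Because the rescaling touches only the positional angles, such a calibrator can be applied to a frozen model at inference time, which is what makes training-free approaches of this kind lightweight.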

inproceedings ZA26


Findings @EACL 2026

Findings of the 19th Conference of the European Chapter of the Association for Computational Linguistics. Rabat, Morocco, Mar 24-29, 2026.

Authors

T. Zehle • M. Aßenmacher

Links

DOI

Research Area

A1 | Statistical Foundations & Explainability

BibTeXKey: ZA26
