
Language Model Re-Rankers Are Fooled by Lexical Similarities


Abstract

Language model (LM) re-rankers are used to refine retrieval results for retrieval-augmented generation (RAG). They are more expensive than lexical matching methods like BM25 but are assumed to better capture semantic information and the relations between the query and the retrieved answers. To understand whether LM re-rankers always live up to this assumption, we evaluate 6 different LM re-rankers on the NQ, LitQA2 and DRUID datasets. Our results show that LM re-rankers struggle to outperform a simple BM25 baseline on DRUID. Leveraging a novel separation metric based on BM25 scores, we identify and explain re-ranker errors stemming from lexical dissimilarities between queries and answers. We also investigate different methods to improve LM re-ranker performance and find these methods mainly useful for NQ. Taken together, our work identifies and explains weaknesses of LM re-rankers and points to the need for more adversarial and realistic datasets for their evaluation.
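The abstract mentions a separation metric based on BM25 scores but does not define it. The following is a minimal Python sketch of one plausible reading, assuming the metric measures how far the gold passage's BM25 score sits above the best-scoring distractor; the function name `bm25_separation` and the use of the `rank_bm25` package are illustrative choices, not the paper's implementation.

```python
# Minimal sketch of a BM25-based separation metric (an assumption, not the
# paper's exact definition): gold-passage BM25 score minus the strongest
# distractor's BM25 score for the same query.
from rank_bm25 import BM25Okapi


def bm25_separation(query: str, gold_passage: str, distractors: list[str]) -> float:
    """Return the gold passage's BM25 score minus the best distractor score.

    A large positive value means the query and the gold answer share much
    vocabulary (easy for lexical methods); values near or below zero mark
    the lexically dissimilar cases where, per the abstract, LM re-rankers
    tend to make errors.
    """
    corpus = [gold_passage] + distractors
    tokenized_corpus = [doc.lower().split() for doc in corpus]
    bm25 = BM25Okapi(tokenized_corpus)
    scores = bm25.get_scores(query.lower().split())
    return float(scores[0] - max(scores[1:]))


# Illustrative usage: high lexical overlap with the gold passage yields a
# large positive separation.
query = "who wrote the novel Dracula"
gold = "Dracula is an 1897 Gothic horror novel written by Bram Stoker."
distractors = ["Frankenstein was written by Mary Shelley in 1818."]
print(bm25_separation(query, gold, distractors))
```

Under this reading, query-passage pairs with low or negative separation are the lexically dissimilar cases on which re-ranker errors can be binned and analyzed.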

Type: inproceedings


FEVER @ACL 2025

8th Fact Extraction and VERification Workshop at the 63rd Annual Meeting of the Association for Computational Linguistics. Vienna, Austria, Jul 27–Aug 1, 2025.

Authors

L. Hagström • E. Nie • R. Halifa • H. Schmid • R. Johansson • A. Junge


Research Area

 B2 | Natural Language Processing

BibTeXKey: HNH+25
