
Dual-Layer Prompt Ensembles: Leveraging System- And User-Level Instructions for Robust LLM-Based Query Expansion and Rank Fusion


Abstract

Large Language Models (LLMs) show strong potential for query expansion (QE), but their effectiveness is highly sensitive to prompt design. This paper investigates whether exploiting the system-user prompt distinction in chat-based LLMs improves QE, and how multiple expansions should be combined. We propose Dual-Layer Prompt Ensembles, which pair a behavioural system prompt with varied user prompts to generate diverse expansions, and aggregate their BM25-ranked lists using lightweight SU-RankFusion schemes. Experiments on six heterogeneous datasets show that dual-layer prompting consistently outperforms strong single-prompt baselines. For example, on Touche-2020 a dual-layer configuration improves nDCG@10 from 0.4177 (QE-CoT) to 0.4696, and SU-RankFusion further raises it to 0.4797. On Robust04 and DBPedia, SU-RankFusion improves nDCG@10 over BM25 by 24.7% and 25.5%, respectively, with similar gains on NFCorpus, FiQA, and TREC-COVID. These results demonstrate that system-user prompt ensembles are effective for QE, and that simple fusion transforms prompt-level diversity into stable retrieval improvements.
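The aggregation step described above can be sketched in a few lines. The snippet below is an illustrative assumption, not the paper's exact SU-RankFusion formulation: it pairs one behavioural system prompt with several varied user prompts (hypothetical templates) and fuses the resulting BM25-ranked lists with reciprocal rank fusion, a standard lightweight fusion scheme.

```python
# Hypothetical sketch of dual-layer prompt ensembles with rank fusion.
# Prompt templates and the fusion scheme (RRF) are illustrative assumptions;
# the paper's SU-RankFusion variants may differ.

# One behavioural system prompt shared across the ensemble.
SYSTEM_PROMPT = "You are an expert search assistant who expands queries with relevant terms."

# Varied user prompts: each (system, user) pair yields one query expansion,
# which is then retrieved against with BM25 to produce one ranked list.
USER_PROMPTS = [
    "List synonyms and closely related terms for: {query}",
    "Write a short passage that answers: {query}",
    "Name the key entities and concepts in: {query}",
]

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked doc-ID lists: score(d) = sum over lists of 1/(k + rank)."""
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Higher fused score first.
    return sorted(scores, key=scores.get, reverse=True)
```

For example, fusing the per-prompt rankings `["a", "b"]` and `["b", "c", "a"]` promotes `"b"`, which is ranked highly by both lists, ahead of documents favoured by only one.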

Article


Information Fusion

131:104160. Jul. 2026.

Authors

M. Li • E. Nie • H. Huang • X. Lv • G. Zhou


Research Area

B2 | Natural Language Processing

BibTeX Key: LNH+26
