PulseAugur

A Reproducibility Study of LLM-Based Query Reformulation

Two new research papers examine how Large Language Models (LLMs) are applied to information retrieval, and at what cost. The first, a reproducibility study, evaluates ten LLM-based query reformulation methods across multiple retrieval paradigms and LLM sizes, finding that gains depend heavily on the underlying retrieval method and that larger models do not always perform better. The second introduces ResRank, a unified framework that compresses each passage into a single embedding for efficient listwise reranking, addressing the latency and quality-degradation bottlenecks of feeding full passage texts into LLMs.

Summary written from 4 sources.

IMPACT These studies highlight the need for careful evaluation of LLM effectiveness in retrieval and introduce methods for more efficient LLM-based reranking.

RANK_REASON The cluster contains two academic papers discussing LLM applications in information retrieval and proposing new frameworks.

Read on arXiv cs.CL →
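The query-reformulation pipelines the first paper evaluates typically prompt an LLM to generate expansion text for a query, then retrieve with the combined string. A minimal sketch of that pattern follows; `generate_expansion` is a hypothetical stand-in for a real LLM call, and the query-repetition weighting is one common convention (as in Query2Doc-style expansion), not the paper's specific method.

```python
def generate_expansion(query: str) -> str:
    """Placeholder for an LLM call that writes a pseudo-passage for the query.

    A real system would prompt an LLM here; canned text keeps the sketch
    self-contained and runnable.
    """
    return f"Background passage elaborating on: {query}"

def reformulate(query: str, weight: int = 3) -> str:
    """Repeat the original query `weight` times so it is not drowned out
    by the (usually longer) LLM expansion, then append the expansion."""
    expansion = generate_expansion(query)
    return " ".join([query] * weight + [expansion])

print(reformulate("llm query reformulation"))
```

The reproducibility question the paper raises lives in details like `weight`, the prompt, and the downstream retriever: the same expansion text can help a lexical retriever like BM25 while hurting a dense one.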

COVERAGE [4]

  1. arXiv cs.CL TIER_1 · Amin Bigdeli, Radin Hamidi Rad, Hai Son Le, Mert Incesu, Negar Arabzadeh, Charles L. A. Clarke, Ebrahim Bagheri ·

    A Reproducibility Study of LLM-Based Query Reformulation

    arXiv:2604.27421v1 · Abstract: Large Language Models (LLMs) are now widely used for query reformulation and expansion in Information Retrieval, with many studies reporting substantial effectiveness gains. However, these results are typically obtained under hete…

  2. arXiv cs.CL TIER_1 · Ebrahim Bagheri ·

    A Reproducibility Study of LLM-Based Query Reformulation

    Large Language Models (LLMs) are now widely used for query reformulation and expansion in Information Retrieval, with many studies reporting substantial effectiveness gains. However, these results are typically obtained under heterogeneous experimental conditions, making it diffi…

  3. arXiv cs.AI TIER_1 · Xiaojie Ke, Shuai Zhang, Liansheng Sun, Yongjin Wang, Hengjun Jiang, Xiangkun Liu, Cunxin Gu, Jian Xu, Guanjun Jiang ·

    ResRank: Unifying Retrieval and Listwise Reranking via End-to-End Joint Training with Residual Passage Compression

    arXiv:2604.22180v1 · Abstract: Large language model (LLM) based listwise reranking has emerged as the dominant paradigm for achieving state-of-the-art ranking effectiveness in information retrieval. However, its reliance on feeding full passage texts into the L…

  4. arXiv cs.AI TIER_1 · Guanjun Jiang ·

    ResRank: Unifying Retrieval and Listwise Reranking via End-to-End Joint Training with Residual Passage Compression

    Large language model (LLM) based listwise reranking has emerged as the dominant paradigm for achieving state-of-the-art ranking effectiveness in information retrieval. However, its reliance on feeding full passage texts into the LLM introduces two critical bottlenecks: the "lost …
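The core idea behind compression-based reranking described above can be sketched as follows: each candidate passage is reduced to one vector, and the reranker scores the whole list from those vectors instead of full texts. The bag-of-words "embedding" below is a toy stand-in for ResRank's learned residual compression, and the cosine scorer stands in for its jointly trained reranker; none of these names come from the paper's API.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy single-vector compression: a bag-of-words count vector.
    ResRank instead learns a compressed embedding end-to-end."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rerank(query: str, passages: list[str]) -> list[str]:
    """Listwise rerank: score every compressed passage against the query
    and return the passages ordered best-first."""
    q = embed(query)
    return sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)

docs = [
    "reranking with large language models",
    "gardening tips for spring",
    "efficient listwise reranking via passage compression",
]
print(rerank("listwise reranking", docs))
```

The efficiency argument is visible even in this toy: the reranker's input grows with the number of passages, not with their combined text length, which is what removes the latency bottleneck the abstract describes.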