PulseAugur

LLMs struggle with relation completion and relationship advice, showing 'recognition without authorization'

Researchers have developed RC-RAG, a framework that improves the ability of large language models (LLMs) to complete relations, especially when the relevant data is sparse. The method infuses paraphrases of relations at multiple stages of retrieval and generation, improving lexical coverage and reasoning without requiring model fine-tuning. Experiments showed significant gains over existing retrieval-augmented generation baselines, particularly in long-tail scenarios. Separately, a study of LLM advice on relationship dilemmas found that while models identify the same dynamics that human commenters do, they are less likely to explicitly authorize action, especially in sensitive cases involving abuse or safety threats. This pattern, termed "recognition without authorization," marks a structural divergence in LLM advisory style, potentially shaped by safety alignment and training data.
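The paraphrase-infusion idea described above can be sketched in miniature: expanding a relation into several surface forms before retrieval raises the chance of matching a rare phrasing in the corpus. The paraphrase table, overlap scoring, and prompt format below are illustrative assumptions, not the paper's actual pipeline.

```python
# Hedged sketch of paraphrase-infused retrieval for relation completion.
# RELATION_PARAPHRASES, the toy corpus, and the prompt format are all
# hypothetical; RC-RAG's real components are not detailed in this summary.

RELATION_PARAPHRASES = {
    "founded_by": ["founded by", "established by", "started by", "set up by"],
}

CORPUS = [
    "Acme Corp was established by Jane Doe in 1990.",
    "Jane Doe later sold her shares.",
    "Acme Corp makes widgets.",
]

def retrieve(entity, relation, corpus, k=2):
    """Rank passages by lexical overlap with the entity plus ANY paraphrase
    of the relation, so a passage saying "established by" still matches a
    query about "founded_by"."""
    queries = [f"{entity} {p}" for p in RELATION_PARAPHRASES.get(relation, [relation])]

    def score(passage):
        tokens = set(passage.lower().split())
        # Take the best-matching paraphrase query for this passage.
        return max(sum(t in tokens for t in q.lower().split()) for q in queries)

    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(entity, relation, passages):
    """Feed the paraphrase-retrieved passages to the generator as context."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\nComplete the relation: ({entity}, {relation}, ?)"

passages = retrieve("Acme Corp", "founded_by", CORPUS)
prompt = build_prompt("Acme Corp", "founded_by", passages)
```

Without the paraphrase expansion, a query built only from the literal relation name "founded_by" would score zero against the "established by" passage; infusing paraphrases at the retrieval stage is what closes that lexical gap.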

Summary written by gemini-2.5-flash-lite from 4 sources.

IMPACT New research explores methods to improve LLM relation completion and analyzes the limitations of LLMs in providing directive advice for interpersonal dilemmas.

RANK_REASON Two distinct academic papers published on arXiv detailing new methods and analyses related to LLMs.

Read on arXiv cs.CL →

COVERAGE [4]

  1. arXiv cs.CL TIER_1 · Fahmida Alam, Mihai Surdeanu, Ellen Riloff ·

    Bridging the Long-Tail Gap: Robust Retrieval-Augmented Relation Completion via Multi-Stage Paraphrase Infusion

    arXiv:2604.22261v1 Announce Type: new Abstract: Large language models (LLMs) struggle with relation completion (RC), both with and without retrieval-augmented generation (RAG), particularly when the required information is rare or sparsely represented. To address this, we propose…

  2. arXiv cs.CL TIER_1 · Tom van Nuenen ·

    Recognition Without Authorization: LLMs and the Moral Order of Online Advice

    arXiv:2604.22143v1 Announce Type: cross Abstract: Large language models are increasingly used to mediate everyday interpersonal dilemmas, yet how their advisory defaults interact with the concentrated moral orders of specific communities remains poorly understood. This article co…

  3. arXiv cs.CL TIER_1 · Ellen Riloff ·

    Bridging the Long-Tail Gap: Robust Retrieval-Augmented Relation Completion via Multi-Stage Paraphrase Infusion

    Large language models (LLMs) struggle with relation completion (RC), both with and without retrieval-augmented generation (RAG), particularly when the required information is rare or sparsely represented. To address this, we propose a novel multi-stage paraphrase-guided relation-…

  4. arXiv cs.CL TIER_1 · Tom van Nuenen ·

    Recognition Without Authorization: LLMs and the Moral Order of Online Advice

    Large language models are increasingly used to mediate everyday interpersonal dilemmas, yet how their advisory defaults interact with the concentrated moral orders of specific communities remains poorly understood. This article compares four assistant-style LLMs with community-en…