PulseAugur
research

New benchmark tests LLMs' ability to recover helpfulness after user clarifies intent

Researchers have introduced CarryOnBench, a new benchmark designed to evaluate how well large language models recover helpfulness in multi-turn conversations after a user clarifies their intent. The benchmark simulates over 5,900 conversations across 14 models, revealing that many models initially withhold information because they misinterpret the request, not because they lack the knowledge. While most models improve once the user clarifies, some exhibit failure modes such as utility lock-in or unsafe recovery, both of which single-turn evaluations miss.

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Highlights a critical gap in LLM safety evaluations: current single-turn methods may overlook models that remain unhelpful even after a user clarifies a benign intent.

RANK_REASON Academic paper introducing a new benchmark for LLM safety and utility.

Read on arXiv cs.CL →

COVERAGE [2]

  1. arXiv cs.AI TIER_1 · Mingqian Zheng, Malia Morgan, Liwei Jiang, Carolyn Rose, Maarten Sap ·

    Useless but Safe? Benchmarking Utility Recovery with User Intent Clarification in Multi-Turn Conversations

    arXiv:2604.27093v1 Announce Type: cross Abstract: Current LLM safety alignment techniques improve model robustness against adversarial attacks, but overlook whether and how LLMs can recover helpfulness when benign users clarify their intent. We introduce CarryOnBench, the first i…

  2. arXiv cs.CL TIER_1 · Maarten Sap ·

    Useless but Safe? Benchmarking Utility Recovery with User Intent Clarification in Multi-Turn Conversations

    Current LLM safety alignment techniques improve model robustness against adversarial attacks, but overlook whether and how LLMs can recover helpfulness when benign users clarify their intent. We introduce CarryOnBench, the first interactive benchmark that measures whether LLMs ca…