PulseAugur
When2Speak dataset trains LLMs for better turn-taking in multi-party conversations

Researchers have introduced When2Speak, a new dataset and generation pipeline designed to improve how large language models handle turn-taking in multi-party conversations. The dataset contains over 215,000 examples from 16,000 conversations, focusing on the decision of when to speak versus when to remain silent. Supervised fine-tuning on this data significantly boosts model performance, and further gains come from reinforcement learning with asymmetric reward shaping, which reduces missed interventions and increases recall.
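The asymmetric reward shaping described above can be sketched as a simple reward function that penalizes a missed intervention more heavily than an unnecessary one. This is an illustrative sketch, not the authors' implementation: the function name and the specific penalty values are assumptions chosen to show the asymmetry that drives higher recall.

```python
def turn_taking_reward(predicted_speak: bool, should_speak: bool,
                       miss_penalty: float = -2.0,
                       false_speak_penalty: float = -0.5,
                       correct_reward: float = 1.0) -> float:
    """Score a policy's speak/stay-silent decision against the gold label.

    The penalties are asymmetric: staying silent when the model should
    have spoken (a missed intervention) costs more than speaking when
    silence was appropriate, pushing the policy toward higher recall.
    All constants here are hypothetical, for illustration only.
    """
    if predicted_speak == should_speak:
        return correct_reward
    if should_speak and not predicted_speak:
        # Missed intervention: penalized most harshly.
        return miss_penalty
    # Spoke out of turn: milder penalty.
    return false_speak_penalty
```

In an RL fine-tuning loop, a reward of this shape would be computed per turn-taking decision and fed to the policy-gradient update, so the model learns that silence when a contribution was expected is the costlier error.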

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Enables LLMs to participate more naturally in multi-party conversations by learning appropriate turn-taking.

RANK_REASON This is a research paper introducing a new dataset and methodology for LLMs.

Read on arXiv cs.CL →

COVERAGE [1]

  1. arXiv cs.CL · Vihaan Nama, Shreya Mendi, Zian Ye, Brinnae Bent

    When2Speak: A Dataset for Temporal Participation and Turn-Taking in Multi-Party Conversations for Large Language Models

    arXiv:2605.05626v1 · Abstract: Large Language Models (LLMs) excel at generating contextually appropriate responses but remain poorly calibrated for multi-party conversations, where deciding when to speak is as critical as what to say. In such settings, naively re…