PulseAugur

New RbtAct method uses rebuttals to train LLMs for actionable scientific review feedback

Researchers have developed RbtAct, a method that improves the actionability of feedback generated by large language models for scientific peer review. The approach uses existing peer-review rebuttals as implicit supervision, learning which reviewer comments led to concrete revisions. A new dataset, RMR-75K, maps review segments to their corresponding rebuttal segments, enabling models such as Llama-3.1-8B-Instruct to be trained to produce more specific, implementable guidance.
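The core data-construction idea — pairing each reviewer comment with the rebuttal passage that responded to it — can be sketched as a toy alignment step. This is an illustration only, not RbtAct's actual RMR-75K pipeline (which the summary does not detail); the lexical-overlap heuristic and the threshold value are assumptions made here for the sketch:

```python
# Toy sketch: align review segments to rebuttal segments by lexical
# overlap, keeping pairs where the rebuttal plausibly addresses the
# comment. The real dataset construction is more sophisticated.

def _tokens(text):
    return set(text.lower().split())

def overlap(a, b):
    """Jaccard overlap between the token sets of two text segments."""
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def align(review_segments, rebuttal_segments, threshold=0.2):
    """Map each review segment to its best-matching rebuttal segment.

    Segments whose best match falls below `threshold` (an assumed
    cutoff) are treated as comments that drew no concrete response.
    """
    pairs = []
    for rev in review_segments:
        best = max(rebuttal_segments, key=lambda rb: overlap(rev, rb))
        if overlap(rev, best) >= threshold:
            pairs.append((rev, best))
    return pairs

review = [
    "The ablation on dataset size is missing.",
    "Nice writing overall.",
]
rebuttal = [
    "We added an ablation on dataset size in Table 4.",
    "Thank you for the kind words.",
]
# Only the actionable comment survives; the generic praise is dropped.
print(align(review, rebuttal))
```

Aligned pairs like these can then serve as supervision: comments that provoked a concrete revision are positive examples of actionable feedback for fine-tuning.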

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Enhances AI's ability to provide actionable feedback in scientific peer review, potentially improving research quality.

RANK_REASON This is a research paper introducing a new method and dataset for improving AI-generated peer review feedback.


COVERAGE [1]

  1. arXiv cs.CL TIER_1 · Sihong Wu, Yiling Ma, Yilun Zhao, Tiansheng Hu, Owen Jiang, Manasi Patwardhan, Arman Cohan

    RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation

    arXiv:2603.09723v2 (announce type: replace). Abstract: Large language models (LLMs) are increasingly used across the scientific workflow, including to draft peer-review reports. However, many AI-generated reviews are superficial and insufficiently actionable, leaving authors without…