JD.com's AGPO enhances LLM reasoning and search ads with asymmetric policy optimization

Researchers have introduced Asymmetric Group Policy Optimization (AGPO), a reinforcement learning technique designed to improve the reasoning capabilities of large language models. AGPO counteracts the narrowing of reasoning patterns seen in current methods by suppressing incorrect reasoning paths while emphasizing rare, correct ones. On mathematical benchmarks, AGPO achieves state-of-the-art accuracy and continues to improve with scale. The method has also been applied to optimize search ads relevance at JD, yielding significant gains in downstream models.
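The summary describes AGPO's core idea at a high level: within a group of sampled rollouts, suppress incorrect paths and emphasize rare, correct ones. A minimal sketch of what such an asymmetric, group-relative advantage computation could look like is below; the function name, the `beta_neg` damping factor, and the rarity-based boost are illustrative assumptions, not the paper's actual formulation.

```python
import math

def asymmetric_group_advantages(rewards, beta_neg=0.5):
    """Illustrative sketch (names and weighting are assumptions, not the
    paper's method): compute group-normalized advantages over a batch of
    rollouts with verifiable 0/1 rewards, then scale them asymmetrically
    so rare correct rollouts are boosted and incorrect paths are damped."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = math.sqrt(var) or 1.0  # avoid divide-by-zero when all rewards agree
    advantages = [(r - mean) / std for r in rewards]

    # Fraction of correct (reward > 0) rollouts in the group.
    p_correct = sum(1 for r in rewards if r > 0) / n
    # Hypothetical asymmetry: boost positives inversely to how common
    # correct answers are; damp negatives by a fixed factor beta_neg.
    pos_scale = 1.0 / max(p_correct, 1e-6) if p_correct > 0 else 0.0

    return [a * pos_scale if a > 0 else a * beta_neg for a in advantages]
```

With one correct rollout out of four, the single positive advantage is amplified while the three negative advantages are halved, illustrating the asymmetric treatment of the two sides of the group baseline.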

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT This new optimization technique could enhance LLM reasoning accuracy and efficiency, potentially improving applications in areas like search relevance.

RANK_REASON This is a research paper detailing a new method for improving LLM reasoning.

Read on arXiv cs.AI →

COVERAGE [1]

  1. arXiv cs.AI TIER_1 · Yang Xu, Kun Yao, Yiming Deng, Zheng Fang, Kai Ming Ting, Ming Pang

    AGPO: Asymmetric Group Policy Optimization for Verifiable Reasoning and Search Ads Relevance at JD

    arXiv:2605.05826v1 Announce Type: new Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has demonstrated notable success in enhancing the reasoning performance of large language models (LLMs). However, recent studies reveal that while current RLVR methods improve sa…