PulseAugur

ViPO dataset and Poly-DPO algorithm scale visual preference optimization

Researchers have introduced ViPO, a large-scale dataset designed to improve visual generative models through preference optimization. The dataset comprises 1 million image pairs and 300,000 video pairs, addressing limitations of existing datasets such as low resolution and imbalanced distributions. The authors also developed Poly-DPO, an algorithm that is robust to noisy preference data, demonstrating significant gains on existing datasets and superior performance when combined with ViPO.
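The summary gives no implementation details for Poly-DPO, but it describes a DPO-style preference objective hardened against noisy labels. As a rough sketch only: the snippet below shows a standard DPO loss in Python/PyTorch, with a label-smoothing term standing in for noise robustness. The function name, arguments, and the smoothing mechanism are illustrative assumptions, not the paper's actual algorithm.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l,
             beta=0.1, label_smoothing=0.0):
    """Sketch of a DPO-style preference loss over (winner, loser) pairs.

    policy_logp_w / policy_logp_l: log-probability of the preferred and
    dispreferred sample under the model being trained.
    ref_logp_w / ref_logp_l: the same quantities under a frozen reference
    model. label_smoothing > 0 is one common noise-robustness trick; it is
    a stand-in here, since Poly-DPO's actual mechanism is not given above.
    """
    # Implicit reward margin: how much more the policy prefers the winner
    # over the loser, relative to the reference model.
    logits = beta * ((policy_logp_w - ref_logp_w)
                     - (policy_logp_l - ref_logp_l))
    # Standard DPO minimizes -log(sigmoid(margin)); smoothing also places
    # small mass on the flipped label to hedge against mislabeled pairs.
    loss = (-(1 - label_smoothing) * F.logsigmoid(logits)
            - label_smoothing * F.logsigmoid(-logits))
    return loss.mean()

# Toy usage with made-up per-sample log-probabilities.
lp_w = torch.tensor([-12.0, -9.5])
lp_l = torch.tensor([-13.0, -9.0])
ref_w = torch.tensor([-12.5, -9.8])
ref_l = torch.tensor([-12.8, -9.1])
print(dpo_loss(lp_w, lp_l, ref_w, ref_l, beta=0.1, label_smoothing=0.1))
```

With label_smoothing=0 this reduces to vanilla DPO; a nonzero value is one simple way to keep a few contradictory or mislabeled preference pairs from dominating the gradient, which is the failure mode the paper targets at scale.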

Summary written by gemini-2.5-flash-lite from 3 sources.

IMPACT Enhances visual generation model quality by providing a large-scale, high-quality preference dataset and a robust optimization algorithm.

RANK_REASON Academic paper introducing a new dataset and optimization technique for visual generative models.

Read on arXiv cs.CV →

COVERAGE [3]

  1. Hugging Face Daily Papers TIER_1

    ViPO: Visual Preference Optimization at Scale

    While preference optimization is crucial for improving visual generative models, how to effectively scale this paradigm remains largely unexplored. Current open-source preference datasets contain conflicting preference patterns, where winners excel in some dimensions but underper…

  2. arXiv cs.CV TIER_1 · Ming Li, Jie Wu, Justin Cui, Xiaojie Li, Rui Wang, Chen Chen

    ViPO: Visual Preference Optimization at Scale

    arXiv:2604.24953v1 Announce Type: new Abstract: While preference optimization is crucial for improving visual generative models, how to effectively scale this paradigm remains largely unexplored. Current open-source preference datasets contain conflicting preference patterns, whe…

  3. arXiv cs.CV TIER_1 · Chen Chen

    ViPO: Visual Preference Optimization at Scale

    While preference optimization is crucial for improving visual generative models, how to effectively scale this paradigm remains largely unexplored. Current open-source preference datasets contain conflicting preference patterns, where winners excel in some dimensions but underper…