ENTITY
SA-DPO
PulseAugur coverage of SA-DPO — every cluster mentioning SA-DPO across labs, papers, and developer communities, ranked by signal.
Total · 30d
2
2 over 90d
Releases · 30d
0
0 over 90d
Papers · 30d
2
2 over 90d
TIER MIX · 90D
RECENT · PAGE 1/1 · 2 TOTAL
-
Researchers propose structure-aware consistency for LLM preference learning
Researchers have identified a theoretical inconsistency in popular preference-learning methods, such as Direct Preference Optimization (DPO), that are used to align Large Language Models (LLMs). The study proposes a new framework…
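For context on what these papers are critiquing: the standard DPO objective scores each preference pair by the policy's implicit-reward margin over a frozen reference model. A minimal per-pair sketch of that well-known loss (not the new structure-aware framework, whose details are not given here):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * implicit-reward margin).

    Inputs are summed token log-probabilities of the chosen/rejected
    responses under the policy and the frozen reference model.
    """
    pi_logratio = policy_chosen_logp - policy_rejected_logp
    ref_logratio = ref_chosen_logp - ref_rejected_logp
    logit = beta * (pi_logratio - ref_logratio)
    # Numerically stable -log sigmoid(logit).
    if logit >= 0:
        return math.log1p(math.exp(-logit))
    return -logit + math.log1p(math.exp(logit))
```

The loss shrinks as the policy separates chosen from rejected responses more than the reference does, which is the consistency property the cited work scrutinizes.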
-
Researchers refine preference optimization for LLMs with new methods
Researchers have introduced RMiPO, a new framework for offline preference optimization that uses intrinsic response-level mutual information to dynamically adjust preference contributions. This method aims to improve La…
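The blurb above describes RMiPO only at a high level: per-pair contributions to the offline objective are reweighted by a response-level mutual-information score. The exact estimator is not specified here, so the sketch below treats the weight as an opaque input; `pairwise_loss`, `weighted_preference_loss`, and the tuple layout are illustrative names, not the paper's API.

```python
import math

def pairwise_loss(policy_chosen, policy_rejected,
                  ref_chosen, ref_rejected, beta=0.1):
    # Standard per-pair DPO term: -log sigmoid(beta * margin),
    # computed in a numerically stable way.
    logit = beta * ((policy_chosen - policy_rejected)
                    - (ref_chosen - ref_rejected))
    if logit >= 0:
        return math.log1p(math.exp(-logit))
    return -logit + math.log1p(math.exp(logit))

def weighted_preference_loss(pairs, beta=0.1):
    """Weighted average of per-pair losses.

    Each tuple is (policy_chosen, policy_rejected, ref_chosen,
    ref_rejected, weight), where `weight` stands in for whatever
    per-pair score (e.g. a mutual-information estimate) the method
    uses to modulate that pair's contribution.
    """
    total = sum(w for *_, w in pairs)
    return sum(w * pairwise_loss(*lp, beta=beta)
               for *lp, w in pairs) / total
```

This only illustrates the general shape of "dynamically adjusting preference contributions"; the substance of RMiPO lies in how the weights are derived, which the summary does not detail.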