PulseAugur

Curiosity-Critic reward improves world model training accuracy

Researchers have introduced Curiosity-Critic, a new intrinsic reward mechanism for training world models. Rather than rewarding only local, per-transition prediction errors, the method rewards improvement in the world model's cumulative prediction error across all visited transitions, yielding a more effective exploration signal. A learned critic estimates an asymptotic error baseline, letting the system distinguish reducible from irreducible prediction error. In experiments, Curiosity-Critic surpassed existing methods in both convergence speed and world model accuracy.
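The idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names and the use of a scalar critic baseline are assumptions for clarity. The reward is the drop in the *reducible* part of the cumulative prediction error, where "reducible" means the portion above the critic's estimate of the asymptotic (irreducible) error floor.

```python
def intrinsic_reward(prev_cum_error: float,
                     new_cum_error: float,
                     critic_baseline: float) -> float:
    """Hypothetical Curiosity-Critic-style reward (names assumed, not from the paper).

    prev_cum_error / new_cum_error: cumulative world-model prediction error
        over visited transitions, before and after the latest update.
    critic_baseline: learned critic's estimate of the asymptotic error floor,
        i.e. the irreducible error that no amount of training can remove.
    """
    # Clip at the baseline so irreducible (aleatoric) error is never rewarded.
    reducible_prev = max(prev_cum_error - critic_baseline, 0.0)
    reducible_new = max(new_cum_error - critic_baseline, 0.0)
    # Reward the improvement in reducible cumulative error.
    return reducible_prev - reducible_new
```

For example, if cumulative error falls from 10.0 to 7.0 against a baseline of 2.0, the agent receives a reward of 3.0; but once error approaches the irreducible floor, further noise-driven fluctuations below the baseline yield no reward, which is the mechanism the summary credits for separating reducible from irreducible error.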

Summary written by gemini-2.5-flash-lite from 1 source.

The submission is an academic paper on arXiv detailing a new method for training world models.


COVERAGE [1]

  1. arXiv stat.ML · Haicheng Wang

    Curiosity-Critic: Cumulative Prediction Error Improvement as a Tractable Intrinsic Reward for World Model Training

    Local prediction-error-based curiosity rewards focus on the current transition without considering the world model's cumulative prediction error across all visited transitions. We introduce Curiosity-Critic, which grounds its intrinsic reward in the improvement of this cumulative…