PulseAugur

New Rose optimizer offers low VRAM, fast convergence, and great results

A new PyTorch optimizer named Rose has been released under the Apache 2.0 license. Developed by Matthew K., Rose is designed to be stateless, offering significantly lower VRAM usage than optimizers like AdamW, with memory overhead comparable to plain SGD. Early benchmarks suggest it achieves fast convergence and strong generalization, outperforming AdamW on certain tasks and posting competitive results on OpenAI's parameter-golf challenge.
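The "stateless" claim is what drives the SGD-level memory figure: AdamW keeps two extra tensors (first- and second-moment estimates) for every parameter, while a stateless optimizer computes its update from the current gradient alone. The post excerpt does not show Rose's actual update rule, so the sketch below is a generic stateless PyTorch optimizer (a toy sign-SGD variant, purely illustrative and not Rose's algorithm) to make the memory difference concrete.

```python
import torch
from torch.optim import Optimizer

class StatelessSignSGD(Optimizer):
    """Toy stateless optimizer (NOT Rose's algorithm): each step uses only
    the current gradient, so no per-parameter buffers are allocated."""

    def __init__(self, params, lr=1e-3, weight_decay=0.0):
        super().__init__(params, dict(lr=lr, weight_decay=weight_decay))

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            lr, wd = group["lr"], group["weight_decay"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                if wd != 0.0:
                    p.mul_(1.0 - lr * wd)         # decoupled weight decay
                p.add_(p.grad.sign(), alpha=-lr)  # update from gradient alone
        return loss

# AdamW stores exp_avg and exp_avg_sq for every parameter (~2x the model size
# in extra optimizer state); a stateless step like the one above stores
# nothing, which is where the SGD-level memory overhead comes from.
model = torch.nn.Linear(512, 512)
opt = StatelessSignSGD(model.parameters(), lr=1e-3)
model(torch.randn(8, 512)).sum().backward()
opt.step()
```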

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Offers a low-VRAM alternative for model training, potentially enabling larger models on consumer hardware.

RANK_REASON Release of a new open-source optimizer with benchmark results and code.

Read on r/MachineLearning →

COVERAGE [1]

  1. r/MachineLearning TIER_1 · /u/ECF630

    [New Optimizer] 🌹 Rose: low VRAM, easy to use, great results, Apache 2.0 [P]

    Hello, World! I recently released a new PyTorch optimizer I've been researching and developing on my own for the last couple of years. It's named "Rose" in memory of my mother, who loved to hear about my discoveries and progress with AI…