
Nora optimizer achieves efficiency, stability, and speed for large-scale LLM training

Researchers have introduced Nora, a matrix-based optimizer for training large language models. Nora aims to unify efficiency, stability, and speed, addressing limitations of existing methods such as Muon and RMNP: it stabilizes weight norms and angular velocities, approximates structured preconditioning, and runs in O(mn) time per update, with a two-line core implementation.

Summary written by gemini-2.5-flash-lite from 2 sources.
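
The abstract excerpts below are truncated, so the paper's actual update rule isn't available here. As a purely illustrative sketch, here is a minimal PyTorch update with the properties the summary names: a two-line core, row-wise normalization ("row alignment"), weight-norm stabilization, and O(mn) cost per step. Every detail of this rule (the function name, hyperparameters, and the update itself) is an assumption, not the published Nora algorithm.

```python
import torch

@torch.no_grad()
def nora_like_step(W, M, grad, lr=0.02, beta=0.95, eps=1e-8):
    # Hypothetical sketch only; NOT the update rule from the Nora paper.
    M.mul_(beta).add_(grad)                          # momentum buffer, O(mn)
    # Assumed "two-line" core: align each update row to unit norm,
    # then couple the step size to ||W|| so the weight norm stays stable.
    D = M / (M.norm(dim=1, keepdim=True) + eps)      # row alignment, O(mn)
    W.sub_((lr * W.norm() / (D.norm() + eps)) * D)   # norm-coupled step, O(mn)
    return W, M

# Example: one step on a random 512x256 layer with a stand-in gradient.
W = torch.randn(512, 256) * 0.02
M = torch.zeros_like(W)
nora_like_step(W, M, torch.randn_like(W))
```

Every operation above touches each of the m·n entries a constant number of times, which is what an O(mn) per-step cost requires; the real implementation would differ in the core update.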

IMPACT Introduces a new optimization technique that could accelerate large-scale LLM training and improve stability.

RANK_REASON The cluster contains an arXiv preprint detailing a new optimization method for LLM training.

Read on arXiv cs.LG →

COVERAGE [2]

  1. arXiv cs.LG TIER_1 · Jinghui Yuan, Jiaxuan Zou, Shuo Wang, Yong Liu, Feiping Nie

    Nora: Normalized Orthogonal Row Alignment for Scalable Matrix Optimizer

    arXiv:2605.03769v1 · Abstract: Matrix-based optimizers have demonstrated immense potential in training Large Language Models (LLMs); however, designing an ideal optimizer remains a formidable challenge. A superior optimizer must satisfy three core desiderata: eff…

  2. arXiv cs.LG TIER_1 · Feiping Nie

    Nora: Normalized Orthogonal Row Alignment for Scalable Matrix Optimizer

    Matrix-based optimizers have demonstrated immense potential in training Large Language Models (LLMs); however, designing an ideal optimizer remains a formidable challenge. A superior optimizer must satisfy three core desiderata: efficiency, achieving Muon-like preconditioning to …
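
For context on the "Muon-like preconditioning" the excerpt mentions: Muon orthogonalizes the momentum matrix with a few quintic Newton-Schulz iterations, each built from dense matrix products, which is the overhead a strictly O(mn) method would avoid. Below is a sketch of that orthogonalization step, adapted from Muon's public reference implementation (coefficients as published); it is included only as background, not as part of Nora.

```python
import torch

@torch.no_grad()
def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    # Quintic Newton-Schulz iteration used by the Muon optimizer to
    # push the momentum matrix's singular values toward 1.
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + eps)          # scale so singular values are <= 1
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T                       # iterate on the smaller Gram matrix
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X
```

Each iteration costs a handful of matrix-matrix products, so replacing it with row-wise normalization is one plausible way an optimizer could reach the O(mn) complexity the summary attributes to Nora.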