PulseAugur

New theories explore spectral dynamics in deep neural network training

Two new arXiv papers examine the spectral dynamics of deep neural networks during training. The first introduces Neural Low-Degree Filtering (Neural LoFi), a theoretical framework that casts hierarchical feature learning as an iterative spectral procedure. The second develops a dynamical mean-field theory of how hidden-weight spectra evolve, predicting outlier escape and learning-rate transfer in wide networks.
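Neither paper's machinery is reproduced here, but the phenomenon the second paper studies can be observed with a minimal empirical sketch: track the singular-value spectrum of a hidden weight matrix during gradient descent on a synthetic single-index task, where a spike typically separates from the bulk as the network aligns with the target direction. The task, architecture, and hyperparameters below are illustrative assumptions, not taken from either paper.

```python
# Illustrative sketch (not the papers' DMFT or Neural LoFi analysis):
# watch the singular values of a two-layer net's hidden weights W
# as plain gradient descent fits a single-index target.
import numpy as np

rng = np.random.default_rng(0)
d, width, n, lr, steps = 50, 100, 500, 0.2, 300

# Synthetic "spiked" task: labels depend on one hidden direction u.
u = rng.normal(size=d)
u /= np.linalg.norm(u)
X = rng.normal(size=(n, d))
y = np.tanh(X @ u)

# Two-layer net f(x) = a . tanh(W x), standard initialization scales.
W = rng.normal(size=(width, d)) / np.sqrt(d)
a = rng.normal(size=width) / np.sqrt(width)

def loss(W, a):
    return 0.5 * np.mean((np.tanh(X @ W.T) @ a - y) ** 2)

spectra = [np.linalg.svd(W, compute_uv=False)]
loss0 = loss(W, a)
for _ in range(steps):
    H = np.tanh(X @ W.T)                              # (n, width) hidden activations
    err = H @ a - y                                   # residuals
    dH = (err[:, None] * a[None, :]) * (1.0 - H**2)   # backprop through tanh
    W -= lr * (dH.T @ X) / n                          # gradient step, hidden weights
    a -= lr * (H.T @ err) / n                         # gradient step, readout
    spectra.append(np.linalg.svd(W, compute_uv=False))

s0, sT = spectra[0], spectra[-1]
print(f"loss: {loss0:.4f} -> {loss(W, a):.4f}")
print(f"top/median singular value of W: {s0[0] / np.median(s0):.2f}"
      f" -> {sT[0] / np.median(sT):.2f}")
```

Comparing the top singular value to the median of the spectrum before and after training gives a crude proxy for an outlier escaping the initial random-matrix bulk; the papers' contribution is to characterize such dynamics theoretically rather than observe them numerically.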

Summary written by gemini-2.5-flash-lite from 4 sources.

IMPACT These theoretical frameworks offer new perspectives on how deep neural networks learn, potentially guiding future model development and analysis.

RANK_REASON Two academic papers published on arXiv presenting theoretical frameworks for understanding deep learning.


COVERAGE [4]

  1. arXiv cs.LG TIER_1 · Florent Krzakala

    Deep Learning as Neural Low-Degree Filtering: A Spectral Theory of Hierarchical Feature Learning

    Understanding how deep neural networks learn useful internal representations from data remains a central open problem in the theory of deep learning. We introduce Neural Low-Degree Filtering (Neural LoFi), a stylized limit of gradient-based training in which hierarchical feature …

  2. arXiv cs.AI TIER_1 · Blake Bordelon

    Spectral Dynamics in Deep Networks: Feature Learning, Outlier Escape, and Learning Rate Transfer

    We study the evolution of hidden-weight spectra in wide neural networks trained by (stochastic) gradient descent. We develop a two-level dynamical mean-field theory (DMFT) that jointly tracks bulk and outlier spectral dynamics for spiked ensembles whose spike directions remain st…

  3. arXiv stat.ML TIER_1 · Yatin Dandi, Matteo Vilucchio, Luca Arnaboldi, Hugo Tabanelli, Florent Krzakala

    Deep Learning as Neural Low-Degree Filtering: A Spectral Theory of Hierarchical Feature Learning

    arXiv:2605.13612v1 (cross-listed) · Abstract: Understanding how deep neural networks learn useful internal representations from data remains a central open problem in the theory of deep learning. We introduce Neural Low-Degree Filtering (Neural LoFi), a stylized limit of grad…

  4. arXiv stat.ML TIER_1 · Clarissa Lauditi, Cengiz Pehlevan, Blake Bordelon

    Spectral Dynamics in Deep Networks: Feature Learning, Outlier Escape, and Learning Rate Transfer

    arXiv:2605.07870v1 (cross-listed) · Abstract: We study the evolution of hidden-weight spectra in wide neural networks trained by (stochastic) gradient descent. We develop a two-level dynamical mean-field theory (DMFT) that jointly tracks bulk and outlier spectral dynamics for…