PulseAugur
research · [1 source]

New algorithms offer efficient finite initialization for tensorized neural networks

Researchers have developed novel algorithms for initializing layers of tensorized neural networks and general tensor network algorithms. These methods use partial computations of Frobenius norms and positive lineal entrywise sums to keep the norm finite during initialization, avoiding both divergence and collapse to zero. The approach proved effective when applied to Matrix Product State/Tensor Train and Matrix Product Operator/Tensor Train Matrix layers, and scales with network size and dimensions. All associated code has been made publicly available.
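The core idea of norm-controlled initialization can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the authors' exact algorithm: it draws random tensor-train (MPS) cores one at a time and rescales each core using the partial Frobenius norm of the contraction so far, so the full network's norm neither explodes nor vanishes as the number of cores grows. The function name and the even per-core norm split are choices made here for illustration.

```python
import numpy as np

def init_tensor_train(dims, ranks, target_norm=1.0, seed=0):
    """Initialize tensor-train (MPS) cores with a controlled Frobenius norm.

    Illustrative sketch only: after drawing each core randomly, rescale it
    using the partial Frobenius norm of the left contraction, so the final
    network has Frobenius norm `target_norm`.
    """
    rng = np.random.default_rng(seed)
    n = len(dims)
    full_ranks = [1] + list(ranks) + [1]
    cores = []
    # E is the Gram matrix of the partial left contraction (the "left
    # environment"); its trace equals the squared partial Frobenius norm.
    E = np.ones((1, 1))
    per_core_target = target_norm ** (2.0 / n)  # spread the norm evenly
    for k in range(n):
        core = rng.standard_normal((full_ranks[k], dims[k], full_ranks[k + 1]))
        # Squared partial Frobenius norm after absorbing this core.
        M = np.einsum('ab,aic,bid->cd', E, core, core)
        partial_sq = np.trace(M)
        if partial_sq == 0.0:  # guard against a zero partial norm
            core = np.ones_like(core)
            M = np.einsum('ab,aic,bid->cd', E, core, core)
            partial_sq = np.trace(M)
        prev_sq = np.trace(E)
        # Rescale so trace(E) grows by exactly per_core_target per step.
        scale = np.sqrt(per_core_target * prev_sq / partial_sq)
        core *= scale
        E = M * scale ** 2
        cores.append(core)
    return cores
```

Contracting the returned cores into a full tensor yields a Frobenius norm equal to `target_norm` by construction, independent of the number of cores or their dimensions.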

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Introduces new initialization techniques for tensorized networks, potentially improving training stability and efficiency for specific model architectures.

RANK_REASON This is a research paper detailing novel algorithms for tensorized neural networks.

Read on arXiv cs.LG →

COVERAGE [1]

  1. arXiv cs.LG TIER_1 · Alejandro Mata Ali, Iñigo Perez Delgado, Marina Ristol Roura, Aitor Moreno Fdez. de Leceta

    Efficient Finite Initialization with Partial Norms for Tensorized Neural Networks and Tensor Networks Algorithms

    arXiv:2309.06577v5 Announce Type: replace Abstract: We present two algorithms to initialize layers of tensorized neural networks and general tensor network algorithms using partial computations of their Frobenius norms and positive lineal entrywise sums, depending on the type of …