PulseAugur

New research explores advanced algorithms for robust and efficient decentralized machine learning

Researchers have developed several new algorithms for decentralized learning, focusing on improved efficiency and robustness. One approach, AsylADMM, targets non-smooth optimization on memory-constrained edge devices, requiring only two variables per node. Another study shows that row-stochastic mixing matrices can provably outperform doubly stochastic ones in decentralized learning, yielding tighter convergence rates. New theoretical frameworks are also emerging for decentralized stochastic gradient descent under Markov chain sampling and for high-probability convergence guarantees in decentralized stochastic optimization with gradient tracking; minimal sketches of the gossip and gradient-tracking updates follow below.

Summary written by gemini-2.5-flash-lite from 6 sources.
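
To make the mixing-matrix distinction concrete, below is a minimal sketch of decentralized gradient descent with gossip averaging over a row-stochastic matrix. It illustrates the general technique only and is not code from any of the cited papers; the 3-node network, the quadratic local losses, and the matrix W are all assumptions chosen for the example.

    import numpy as np

    # Minimal sketch (not from the cited papers): decentralized gradient
    # descent on n nodes. Each node takes a local gradient step, then
    # gossip-averages with its neighbors via a mixing matrix W. This W is
    # row-stochastic (rows sum to 1) but not doubly stochastic (columns
    # sum to 0.9, 1.2, 0.9), the relaxation studied in source 2.
    rng = np.random.default_rng(0)
    n, d, lr = 3, 5, 0.1

    # Hypothetical local losses f_i(x) = 0.5 * ||x - b_i||^2, so the
    # local gradient is x - b_i.
    b = rng.normal(size=(n, d))
    x = np.zeros((n, d))                    # one parameter row per node

    # Row-stochastic mixing matrix on the line graph 0 - 1 - 2.
    W = np.array([[0.7, 0.3, 0.0],
                  [0.2, 0.6, 0.2],
                  [0.0, 0.3, 0.7]])
    assert np.allclose(W.sum(axis=1), 1.0)  # rows sum to 1

    # Left Perron vector of W (satisfies pi @ W == pi): with a
    # row-stochastic W, the network effectively minimizes the pi-weighted
    # average loss, matching the heterogeneous node weights in source 2.
    pi = np.array([2.0, 3.0, 2.0]) / 7.0

    for _ in range(200):
        grads = x - b                       # local gradients
        x = W @ (x - lr * grads)            # local step, then gossip

    # Rows of x are close but not identical: a constant step size gives
    # only approximate consensus around the pi-weighted average of the b_i.
    print("max deviation from pi-weighted average:", np.abs(x - pi @ b).max())

Swapping in a doubly stochastic W would drive the nodes toward the unweighted average loss instead; the row-stochastic relaxation is what lets the network weight nodes unevenly.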

IMPACT Advances in decentralized learning algorithms could enable more efficient and robust training on distributed or edge computing systems.

RANK_REASON Multiple arXiv papers presenting new algorithms and theoretical analyses for decentralized learning.

Read on arXiv stat.ML →
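
For context on source 4, the sketch below shows the standard gradient-tracking recursion such analyses build on, in which each node maintains a tracker of the network-average gradient. This is the generic textbook update, not the authors' implementation; the quadratic losses and the doubly stochastic W are assumptions of the example.

    import numpy as np

    # Generic gradient-tracking sketch (not the paper's code). Each node i
    # keeps a parameter x_i and a tracker y_i of the average gradient:
    #   x_i <- sum_j W[i, j] * x_j - lr * y_i
    #   y_i <- sum_j W[i, j] * y_j + grad_i(x_i_new) - grad_i(x_i_old)
    rng = np.random.default_rng(1)
    n, d, lr = 3, 5, 0.1

    # Hypothetical local losses f_i(x) = 0.5 * ||x - b_i||^2.
    b = rng.normal(size=(n, d))
    grad = lambda x: x - b                  # stacked per-node gradients

    # Doubly stochastic mixing matrix (rows and columns sum to 1).
    W = np.array([[0.5, 0.5, 0.0],
                  [0.5, 0.0, 0.5],
                  [0.0, 0.5, 0.5]])

    x = np.zeros((n, d))
    y = grad(x)                             # trackers start at local gradients

    for _ in range(200):
        x_new = W @ x - lr * y              # consensus step + tracked descent
        y = W @ y + grad(x_new) - grad(x)   # refresh the trackers
        x = x_new

    # With exact gradients every node reaches the minimizer of the average
    # loss, i.e. the mean of the b_i, up to numerical error.
    print("max error:", np.abs(x - b.mean(axis=0)).max())

Because the trackers converge to the network-average gradient, gradient tracking removes the bias that plain decentralized SGD incurs under heterogeneous data; source 4's high-probability analysis targets stochastic-gradient versions of updates like this one.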

COVERAGE [6]

  1. arXiv cs.LG TIER_1 · Anna van Elst, Igor Colin, Stephan Clémençon ·

    Fast and Efficient Gossip Algorithms for Robust and Non-smooth Decentralized Learning

    arXiv:2601.20571v2 Announce Type: replace Abstract: Decentralized learning on resource-constrained edge devices demands algorithms that are communication-efficient, robust to data corruption, and lightweight in memory. State-of-the-art gossip-based methods address communication e…

  2. arXiv cs.LG TIER_1 · Bing Liu, Boao Kong, Limin Lu, Kun Yuan, Chengcheng Zhao ·

    Row-stochastic matrices can provably outperform doubly stochastic matrices in decentralized learning

    arXiv:2511.19513v2 Announce Type: replace Abstract: Decentralized learning often involves a weighted global loss with heterogeneous node weights $\lambda$. We revisit two natural strategies for incorporating these weights: (i) embedding them into the local losses to retain a unif…

  3. arXiv cs.LG TIER_1 · Jiahuan Wang, Ziqing Wen, Ping Luo, Dongsheng Li, Tao Sun ·

    Stability and Generalization for Decentralized Markov SGD

    arXiv:2605.01701v1 Announce Type: new Abstract: Stochastic gradient methods are central to large-scale learning, yet their generalization theory typically relies on independent sampling assumptions. In many practical applications, data are generated by Markov chains and learning …

  4. arXiv cs.LG TIER_1 · Aleksandar Armacki, Haoyuan Cai, Ali H. Sayed ·

    High-Probability Convergence in Decentralized Stochastic Optimization with Gradient Tracking

    arXiv:2605.00281v1 Announce Type: new Abstract: We study high-probability (HP) convergence guarantees in decentralized stochastic optimization, where multiple agents collaborate to jointly train a model over a network. Existing HP results in decentralized settings almost exclusiv…

  5. arXiv cs.LG TIER_1 · Mohammad Rafiqul Islam, Lingjiong Zhu ·

    Decentralized Proximal Stochastic Gradient Langevin Dynamics

    arXiv:2605.00723v1 Announce Type: cross Abstract: We propose Decentralized Proximal Stochastic Gradient Langevin Dynamics (DE-PSGLD), a decentralized Markov chain Monte Carlo (MCMC) algorithm for sampling from a log-concave probability distribution constrained to a convex domain.…

  6. arXiv stat.ML TIER_1 · Lingjiong Zhu ·

    Decentralized Proximal Stochastic Gradient Langevin Dynamics

    We propose Decentralized Proximal Stochastic Gradient Langevin Dynamics (DE-PSGLD), a decentralized Markov chain Monte Carlo (MCMC) algorithm for sampling from a log-concave probability distribution constrained to a convex domain. Constraints are enforced through a shared proxima…
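
For sources 5 and 6, the sketch below illustrates the ingredients a decentralized constrained Langevin sampler combines: gossip averaging, a gradient step on the local potential, injected Gaussian noise, and a proximal step enforcing the convex constraint. It uses Euclidean projection onto the unit ball as the prox and identical local potentials; both are assumptions for illustration, and this is not the DE-PSGLD algorithm from the paper.

    import numpy as np

    # Illustration only (not DE-PSGLD): one projected-Langevin step per
    # round on each node, plus gossip averaging. Target: a standard
    # Gaussian restricted to the unit ball, so U(x) = 0.5 * ||x||^2.
    rng = np.random.default_rng(2)
    n, d, step = 3, 2, 0.05
    grad_U = lambda x: x                    # gradient of the potential

    def proj_ball(x):
        # Euclidean projection onto the unit ball: the prox of the
        # ball's indicator function.
        norms = np.linalg.norm(x, axis=1, keepdims=True)
        return x / np.maximum(norms, 1.0)

    # Doubly stochastic gossip matrix on a 3-node ring.
    W = np.array([[0.5, 0.5, 0.0],
                  [0.5, 0.0, 0.5],
                  [0.0, 0.5, 0.5]])

    x = rng.normal(size=(n, d))
    samples = []
    for t in range(5000):
        noise = rng.normal(size=(n, d))
        # Gossip, gradient step, Langevin noise, then the prox/projection.
        x = proj_ball(W @ x - step * grad_U(x) + np.sqrt(2 * step) * noise)
        if t >= 1000:                       # discard burn-in
            samples.append(x.copy())

    samples = np.asarray(samples).reshape(-1, d)
    print("all samples feasible:", (np.linalg.norm(samples, axis=1) <= 1 + 1e-9).all())
    print("sample mean (near 0 by symmetry):", samples.mean(axis=0))

The prox step is what keeps every iterate inside the constraint set; the paper's shared proximal mechanism plays this role for general convex domains.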