New federated learning methods tackle data heterogeneity and scalability challenges
By PulseAugur Editorial
Summary by gemini-2.5-flash-lite
from 19 sources
Researchers have developed several new methods to improve federated learning, a distributed machine learning approach that trains models on decentralized data without sharing raw information. FedHarmony addresses challenges in modeling label correlations across heterogeneous client data by introducing a consensus mechanism. "Who Trains Matters" tackles selection biases in federated learning by proposing an inverse-probability-weighted aggregation scheme to ensure training representativeness. Additionally, new techniques like Subspace Optimization (SSF), FedSLoP, and GradsSharding aim to enhance efficiency by reducing communication and memory overhead, particularly for large models on serverless platforms.
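The inverse-probability-weighted aggregation idea behind "Who Trains Matters" can be sketched in a few lines. This is an illustrative toy, not the paper's exact scheme: the participation probabilities and normalization below are assumptions.

```python
import numpy as np

def ipw_aggregate(updates, participation_probs):
    """Aggregate client updates, up-weighting clients that were
    unlikely to participate (inverse-probability weighting)."""
    weights = 1.0 / np.asarray(participation_probs, dtype=float)
    weights /= weights.sum()  # normalize so the weights sum to 1
    return sum(w * u for w, u in zip(weights, updates))

# Two clients: the rarely-participating client (p = 0.2) gets more weight.
updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
agg = ipw_aggregate(updates, participation_probs=[0.2, 0.8])
```

With these probabilities the under-represented client's update dominates the aggregate, which is exactly the corrective effect the reweighting aims for.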
arXiv:2604.28024v1 Announce Type: new Abstract: Federated Multi-Label Learning is a distributed paradigm where multiple clients possess heterogeneous multi-label data and perform collaborative learning under privacy constraints without sharing raw data. However, modeling label correlations under heterogeneous distributions rem…
arXiv:2604.26604v1 Announce Type: new Abstract: Federated learning (FL) trains a shared model from updates contributed by distributed clients, often implicitly assuming that contributing clients are representative of the target population. In practice, this representativeness assumption can fail at two distinct stages, inducin…
arXiv:2604.25467v1 Announce Type: new Abstract: Federated learning increasingly operates in a large-model regime where communication, memory, and computation are all scarce. Typically, non-IID client data induce drift that degrades the stability and performance of local training. Existing remedies such as SCAFFOLD introduce he…
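The drift correction that SCAFFOLD applies during local training can be sketched as a single gradient step with control variates. The learning rate and vectors below are illustrative assumptions; a full implementation would also maintain and update the control variates across rounds.

```python
import numpy as np

def scaffold_local_step(w, grad, c_local, c_global, lr=0.1):
    """One SCAFFOLD-style local update: the correction term
    (c_global - c_local) counteracts client drift under non-IID data."""
    return w - lr * (grad + c_global - c_local)

w = np.array([1.0, 1.0])
# When the client's control variate matches the global one, the
# correction vanishes and this reduces to plain SGD.
new_w = scaffold_local_step(
    w,
    grad=np.array([1.0, 1.0]),
    c_local=np.array([0.2, 0.2]),
    c_global=np.array([0.2, 0.2]),
)
```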
arXiv:2604.24012v1 Announce Type: new Abstract: Federated learning enables a population of clients to collaboratively train machine learning models without exchanging their raw data, but standard algorithms such as FedAvg suffer from slow convergence and high communication and me…
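For reference, the FedAvg baseline that this abstract criticizes is just a dataset-size-weighted average of client models. A minimal sketch of one aggregation round (client sizes and models here are made-up examples):

```python
import numpy as np

def fedavg(client_models, client_sizes):
    """One FedAvg round: average client models weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()  # each client's share of the total data
    return sum(c * m for c, m in zip(coeffs, client_models))

models = [np.array([2.0, 0.0]), np.array([0.0, 4.0])]
global_model = fedavg(models, client_sizes=[30, 10])
```

Every round ships full models between clients and server, which is the communication and memory cost the newer methods in this digest try to cut.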
arXiv cs.LG
Taehwan Yoon, Bongjun Choi, Wesley De Neve
arXiv:2506.23210v5 Announce Type: replace Abstract: Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy. However, data and system heterogeneity often cause catastrophic forgetting and unbounded drift in model updat…
arXiv:2604.22072v1 Announce Type: cross Abstract: Federated learning (FL) aggregation on serverless platforms faces a hard scalability ceiling: existing architectures (lambda-FL, LIFL) partition clients across aggregators, but every aggregator must hold the complete model gradient in memory. When gradients exceed the per-functio…
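The sharding idea, splitting the gradient itself so no aggregator ever holds the full vector, can be sketched as follows. This is a toy stand-in for GradsSharding, with shard counts and helper names chosen for illustration:

```python
import numpy as np

def shard_gradient(grad, num_shards):
    """Split a flat gradient so each serverless aggregator holds one shard."""
    return np.array_split(grad, num_shards)

def aggregate_shards(client_shards):
    """Each aggregator averages only its own shard across clients; the
    shards are then concatenated to rebuild the full averaged gradient."""
    per_shard = [np.mean(s, axis=0) for s in zip(*client_shards)]
    return np.concatenate(per_shard)

grads = [np.arange(6.0), np.arange(6.0) + 6]  # two clients' gradients
shards = [shard_gradient(g, num_shards=3) for g in grads]
full = aggregate_shards(shards)
```

Peak per-aggregator memory drops from the full gradient size to roughly gradient size divided by the number of shards, which is the point of moving past the per-function memory ceiling.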
We consider what we refer to as {Decision-Focused Federated Learning (DFFL)} framework, i.e., a predict-then-optimize approach employed by a collection of agents, where each agent's predictive model is an input to a downstream linear optimization problem, and no direct exchange o…
arXiv:2604.27510v1 Announce Type: cross Abstract: Federated Learning (FL) enables collaborative model training across distributed clients without sharing raw data, yet its performance deteriorates under statistical heterogeneity. Clustered Federated Learning addresses this challenge by grouping similar clients and training separ…
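A common way to group similar clients is by the similarity of their model updates. The greedy cosine-similarity rule below is a toy illustration, not the specific clustering criterion this paper proposes:

```python
import numpy as np

def cluster_clients(updates, threshold=0.5):
    """Greedily group clients whose update directions are similar:
    join the first cluster whose representative has cosine similarity
    >= threshold, otherwise start a new cluster."""
    clusters = []
    for i, u in enumerate(updates):
        for cluster in clusters:
            rep = updates[cluster[0]]
            cos = np.dot(u, rep) / (np.linalg.norm(u) * np.linalg.norm(rep))
            if cos >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

ups = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])]
groups = cluster_clients(ups)
```

Each resulting group would then train its own model, so clients with conflicting data distributions no longer pull a single shared model in opposite directions.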
arXiv:2604.26116v1 Announce Type: new Abstract: Federated learning is a machine learning paradigm in which multiple devices collaboratively train a model under the supervision of a central server while ensuring data privacy. However, its performance is often hindered by redundant…
Federated prognostics enable clients (e.g., companies, factories, and production lines) to collaboratively develop a failure time prediction model while keeping each client's data local and confidential. However, traditional federated models often assume homogeneity in the degrad…
This episode is a follow up to our recent Fully Connected show discussing federated learning (https://practicalai.fm/153). In that previous discussion, we mentioned Flower (https://flower.dev/), a “friendly” federated learning framework. Well, one of t…
Federated learning is increasingly practical for machine learning developers because of the challenges we face with model and data privacy. In this Fully Connected episode, Chris and Daniel dive into the topic and dissect the ideas behind federated learning, practicalities of …
Federated learning under tight memory constraints on edge devices, part 2. How do you train ML models on edge devices with <256 MB of memory? Hi, Habr! I'm Alexander Loshkarev, a software engineer, and this is the second part of my series on federated learning. In https://habr.com/ru/compa…