New research advances federated learning for privacy and heterogeneity
By PulseAugur Editorial
Summary by gemini-2.5-flash-lite from 20 sources
Researchers are developing new methods to improve federated learning, a technique that trains models on decentralized data without moving the raw data to a central server. Several papers introduce novel algorithms for handling data heterogeneity, such as FedForest for random forests and VARS-FL for client selection in IoT systems. Other work focuses on privacy-preserving inference through consensus embeddings and on robust methods for federated graph neural networks. Additionally, new theoretical frameworks bound generalization errors and incentivize client contributions in federated settings.
Federated learning enables collaborative model training across distributed clients, yet vanilla FL exposes client updates to the central server. Secure-aggregation schemes protect privacy against an honest-but-curious server, but existing approaches often suffer from many communi…
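The excerpt stops before the paper's construction, but the standard pairwise-masking idea behind secure aggregation is easy to sketch: clients add mutually cancelling masks to their updates, so the server can recover only the sum. A minimal sketch, assuming a toy setting where pair masks come from a shared seed rather than cryptographic key agreement (all names hypothetical, not the excerpted paper's scheme):

    import numpy as np

    def pairwise_masks(num_clients, dim, seed=0):
        """Derive one shared mask per client pair. In a real protocol these
        come from Diffie-Hellman key agreement, not a shared seed."""
        rng = np.random.default_rng(seed)
        return {(i, j): rng.normal(size=dim)
                for i in range(num_clients)
                for j in range(i + 1, num_clients)}

    def masked_update(client_id, update, masks, num_clients):
        # Client i adds masks shared with higher-indexed peers and
        # subtracts masks shared with lower-indexed peers.
        out = update.copy()
        for j in range(num_clients):
            if j == client_id:
                continue
            m = masks[(min(client_id, j), max(client_id, j))]
            out += m if client_id < j else -m
        return out

    updates = [np.ones(4) * (i + 1) for i in range(3)]   # toy client updates
    masks = pairwise_masks(3, 4)
    masked = [masked_update(i, u, masks, 3) for i, u in enumerate(updates)]
    # Each pair mask appears once with + and once with -, so they cancel:
    # the server sees only the sum, never an individual update.
    assert np.allclose(sum(masked), sum(updates))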
Performance evaluation is essential for assessing the quality of machine learning (ML) models and guiding deployment decisions. In federated learning (FL), assessing the performance is challenging because data are distributed across participants. Consequently, the coordinator mus…
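The excerpt cuts off before the paper's approach; as a point of reference, the simplest coordinator-side estimator is a sample-weighted average of locally computed metrics. A baseline sketch, not the paper's method:

    # Hypothetical baseline: each participant reports a local metric and its
    # sample count; the coordinator forms the weighted average.
    def federated_metric(local_metrics, local_counts):
        total = sum(local_counts)
        return sum(m * n for m, n in zip(local_metrics, local_counts)) / total

    # Three clients report local accuracy on 100, 50, and 10 examples.
    print(federated_metric([0.90, 0.80, 0.50], [100, 50, 10]))  # 0.84375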
arXiv:2602.00407v2 Announce Type: replace Abstract: Federated Graph Neural Networks (FedGNNs) facilitate collaborative learning across multiple clients with graph-structured data while preserving user privacy. However, emerging research indicates that within this setting, shared …
arXiv:2605.05718v1 Announce Type: new Abstract: Cooperative inference across independently deployed machine learning models is increasingly desirable in distributed environments, as there is a growing need to leverage multiple models while keeping their data and model parameters …
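The excerpt does not show how the consensus is formed; one naive scheme consistent with the summary's "consensus embeddings" is to average unit-normalized embeddings from independently deployed encoders. The aggregation rule below is an assumption for illustration, not the paper's:

    import numpy as np

    def consensus_embedding(embeddings):
        """Average unit-normalized embeddings from independent models,
        then renormalize. A naive consensus rule; the paper's may differ."""
        normed = [e / np.linalg.norm(e) for e in embeddings]
        c = np.mean(normed, axis=0)
        return c / np.linalg.norm(c)

    # Two hypothetical models embed the same input into a shared space.
    e1, e2 = np.array([1.0, 0.0]), np.array([0.8, 0.6])
    print(consensus_embedding([e1, e2]))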
arXiv:2605.05896v1 Announce Type: new Abstract: Federated learning (FL) systems typically employ stateless client selection, treating each communication round independently and ignoring accumulated evidence of client contribution quality. Under non-IID data, this leads to slow co…
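The paper's selection rule is not shown in the excerpt; below is a minimal sketch of the stateful idea it argues for, keeping an exponential moving average of per-client contribution across rounds and sampling proportionally. The scoring and sampling choices here are illustrative assumptions:

    import numpy as np

    def select_clients(scores, k, rng):
        """Sample k clients with probability proportional to accumulated score."""
        p = scores / scores.sum()
        return rng.choice(len(scores), size=k, replace=False, p=p)

    def update_scores(scores, chosen, contributions, alpha=0.9):
        # Exponential moving average retains evidence across rounds
        # (stateful), unlike stateless uniform sampling.
        for c, g in zip(chosen, contributions):
            scores[c] = alpha * scores[c] + (1 - alpha) * g
        return scores

    rng = np.random.default_rng(0)
    scores = np.ones(10)                       # start uniform
    chosen = select_clients(scores, 3, rng)
    scores = update_scores(scores, chosen, contributions=[0.2, 1.5, 0.7])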
arXiv:2505.08125v4 Announce Type: replace-cross Abstract: Federated Learning has gained traction in privacy-sensitive collaborative environments, with local SGD emerging as a key optimization method in decentralized settings. While its convergence properties are well-studied, asy…
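For readers unfamiliar with the method under study: local SGD has each client take several gradient steps between synchronizations, after which the server averages the resulting iterates. A toy sketch on one-dimensional least squares, with data and step size purely illustrative:

    import numpy as np

    def local_sgd(w, data, local_steps, lr):
        # Several SGD steps on a least-squares loss before communicating;
        # only the final iterate is sent to the server.
        for x, y in data[:local_steps]:
            grad = 2 * x * (x * w - y)
            w -= lr * grad
        return w

    w_global = 0.0
    client_data = [[(1.0, 2.0), (2.0, 4.0)],   # client 1: y = 2x
                   [(1.0, 1.0), (3.0, 3.0)]]   # client 2: y = x
    for _ in range(10):                        # communication rounds
        locals_ = [local_sgd(w_global, d, local_steps=2, lr=0.05)
                   for d in client_data]
        w_global = np.mean(locals_)            # server averages iterates
    print(w_global)                            # converges toward a consensus slope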
arXiv:2602.03258v2 Announce Type: replace-cross Abstract: Random Forests (RF) are among the most powerful and widely used predictive models for centralized tabular data, yet few methods exist to adapt them to the federated learning setting. Unlike most federated learning approach…
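The excerpt does not describe FedForest's protocol; the sketch below only illustrates why random forests federate naturally: trees trained on different clients can be pooled into one ensemble without sharing raw data. Merge-by-concatenation is a common baseline, not necessarily the paper's method:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=400, random_state=0)
    shards = [(X[:200], y[:200]), (X[200:], y[200:])]   # two "clients"

    # Each client fits a forest on its local shard.
    local = [RandomForestClassifier(n_estimators=20, random_state=i).fit(Xc, yc)
             for i, (Xc, yc) in enumerate(shards)]

    # Hypothetical server-side merge: concatenate the fitted trees.
    merged = local[0]
    merged.estimators_ += local[1].estimators_
    merged.n_estimators = len(merged.estimators_)
    print(merged.score(X, y))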
arXiv:2605.04827v1 Announce Type: new Abstract: Label Distribution Learning (LDL) models supervision as an instance-wise probability distribution, enabling fine-grained learning under inherent ambiguity, but its success relies on high-fidelity label distributions that are costly …
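For context, LDL replaces a one-hot label with a per-instance probability distribution and typically trains against a divergence such as KL; a minimal sketch of that objective, with values purely illustrative:

    import numpy as np

    def ldl_kl_loss(pred, target, eps=1e-9):
        """KL divergence between a predicted label distribution and a
        per-instance target distribution -- the standard LDL objective."""
        pred, target = np.clip(pred, eps, 1.0), np.clip(target, eps, 1.0)
        return float(np.sum(target * np.log(target / pred)))

    # One instance whose ambiguous label mass is split across three classes.
    print(ldl_kl_loss(np.array([0.5, 0.3, 0.2]), np.array([0.6, 0.3, 0.1])))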
arXiv:2605.04747v1 Announce Type: new Abstract: We introduce Knowledge-Free Correlated Agreement (KFCA) to reward client contributions in federated learning (FL) without relying on ground truth, a public test set, or distribution knowledge. Under categorical reports and an honest…
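The excerpt does not spell out KFCA's scoring rule; it builds on the correlated-agreement family of peer-prediction mechanisms, whose classical score rewards agreement with a peer on the same task and penalizes agreement across independently drawn tasks (which deters blindly reporting a common label). A sketch of that classical score, not KFCA itself:

    import random

    def ca_score(reports_i, reports_j, rng):
        """Classical correlated-agreement score between two clients'
        categorical reports: agreement on a shared task, minus agreement
        on independently drawn tasks. KFCA's exact rule may differ."""
        t = rng.randrange(len(reports_i))
        bonus = int(reports_i[t] == reports_j[t])
        t1, t2 = rng.randrange(len(reports_i)), rng.randrange(len(reports_j))
        penalty = int(reports_i[t1] == reports_j[t2])
        return bonus - penalty

    rng = random.Random(0)
    a = ["cat", "dog", "cat", "bird"]
    b = ["cat", "dog", "dog", "bird"]
    print(sum(ca_score(a, b, rng) for _ in range(1000)) / 1000)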
arXiv:2605.02372v1 Announce Type: cross Abstract: The growing development of artificial intelligence based solutions, together with privacy legislation, has driven the rise of the so-called privacy preserving machine learning architectures, such as federated learning. While feder…
Dario Filatrella, Ragnar Thobaben, Mikael Skoglund
arXiv:2605.03499v1 Announce Type: new Abstract: We study expected generalization bounds for the Hierarchical Federated Learning (HFL) setup using Wasserstein distance. We introduce a generalized framework in which data is sampled hierarchically, and we model it with a multi-layer…
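The excerpt ends before the bound itself; for an L-Lipschitz loss, single-level Wasserstein generalization bounds from prior work typically take the form below, and the HFL framework described here extends such per-sample terms across the levels of the sampling tree. Notation is generic, not the paper's:

    \[
    \big|\mathbb{E}[\mathrm{gen}(W, S)]\big| \;\le\; \frac{L}{n} \sum_{i=1}^{n}
    \mathbb{E}_{Z_i}\!\left[\mathbb{W}_1\big(P_{W \mid Z_i},\, P_W\big)\right]
    \]

where W is the learned hypothesis, S = (Z_1, ..., Z_n) is the training sample, and \mathbb{W}_1 is the 1-Wasserstein distance under the metric induced by the loss.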
arXiv:2511.12340v2 Announce Type: replace Abstract: Efficient machine learning deployment requires models that account for hardware constraints. Because binary logic gates are the fundamental primitives of digital hardware, models built directly from logic operations offer a prom…
Federated Learning (FL) enables decentralised model training across distributed clients without requiring data centralisation. However, the generalisation performance of the global model is usually degraded by data heterogeneity across clients, particularly under limited data ava…
Over-the-air federated learning (OTA-FL) reduces uplink latency by exploiting waveform superposition, but conventional analog aggregation schemes typically require instantaneous channel state information (CSI), channel inversion, and coherent phase alignment, which can be difficu…
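A toy illustration of the CSI requirement the excerpt refers to: in conventional analog aggregation, each client inverts its fading channel so the transmitted updates superpose into the desired sum at the server. The numbers below are purely illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    updates = [rng.normal(size=8) for _ in range(5)]    # per-client model updates

    # Rayleigh-fading channels with random phase; with perfect CSI each
    # client pre-inverts its channel. Relaxing exactly this requirement
    # is what newer OTA-FL schemes aim for.
    h = rng.rayleigh(size=5) * np.exp(1j * rng.uniform(0, 2 * np.pi, 5))
    tx = [u / h_i for u, h_i in zip(updates, h)]        # channel inversion
    rx = sum(h_i * x for h_i, x in zip(h, tx))          # waveform superposition
    rx += 0.01 * rng.normal(size=8)                     # receiver noise

    print(np.allclose(rx.real, sum(updates), atol=0.1))  # recovers the sum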