PulseAugur
research · [20 sources]

New research advances federated learning for privacy and heterogeneity

Researchers are developing new methods to improve federated learning, a technique that lets models train on decentralized data without centralizing it, preserving privacy. Several papers introduce novel algorithms for handling data heterogeneity, including principled federated random forests and VARS-FL's validation-aligned client selection for IoT systems. Other work focuses on privacy-preserving inference through unsupervised consensus embeddings and on robust federated graph neural networks. Additionally, new theoretical frameworks bound generalization errors and incentivize client contributions in federated settings.

Summary written by gemini-2.5-flash-lite from 20 sources. How we write summaries →
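
As background on the technique the summary describes, the core federated-averaging loop (FedAvg-style) can be sketched in a few lines. Everything here is an illustrative assumption — toy linear-regression clients, NumPy, and hand-picked learning rate and round counts — not the method of any paper listed below.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on linear regression."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def fedavg(clients, rounds=20, dim=2):
    """Server loop: broadcast global weights, train locally on each client,
    then average the returned models weighted by local dataset size."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_sgd(w_global, X, y))
            sizes.append(len(y))
        w_global = np.average(updates, axis=0, weights=np.array(sizes, float))
    return w_global

# Three clients holding private shards drawn from the same linear model.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))
w = fedavg(clients)
```

Only model weights ever leave a client; raw `(X, y)` shards stay local, which is the privacy property the papers below strengthen further.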

IMPACT Advances in federated learning methods promise more robust and private AI model training across decentralized datasets.

RANK_REASON Cluster consists of multiple academic papers on federated learning techniques and theory.

Read on arXiv cs.LG →

COVERAGE [20]

  1. arXiv cs.LG TIER_1 · Mete Ozay ·

    DisAgg: Distributed Aggregators for Efficient Secure Aggregation in Federated Learning

    Federated learning enables collaborative model training across distributed clients, yet vanilla FL exposes client updates to the central server. Secure-aggregation schemes protect privacy against an honest-but-curious server, but existing approaches often suffer from many communi…
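
DisAgg's own protocol is not reproduced in the excerpt; as orientation, the classic additive pairwise-masking idea behind secure aggregation can be sketched as follows. The mask range and toy updates are illustrative assumptions.

```python
import random

def masked_updates(updates, seed=0):
    """Pairwise-mask secure aggregation (the classic additive-masking idea):
    each pair of clients (i, j) agrees on a random mask m_ij; client i adds
    it, client j subtracts it. Each masked update looks random to the
    server, but the masks cancel when all updates are summed."""
    rng = random.Random(seed)
    n = len(updates)
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(len(updates[0])):
                m = rng.uniform(-100, 100)  # shared pairwise mask m_ij
                masked[i][k] += m           # client i adds m_ij
                masked[j][k] -= m           # client j subtracts m_ij
    return masked

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
masked = masked_updates(updates)
aggregate = [sum(col) for col in zip(*masked)]  # masks cancel: true sum
```

The communication cost of agreeing on those pairwise masks is exactly the overhead that schemes like DisAgg aim to reduce.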

  2. arXiv cs.LG TIER_1 · Christian Zirpins ·

    FLAM: Evaluating Model Performance with Aggregatable Measures in Federated Learning

    Performance evaluation is essential for assessing the quality of machine learning (ML) models and guiding deployment decisions. In federated learning (FL), assessing the performance is challenging because data are distributed across participants. Consequently, the coordinator mus…

  3. arXiv cs.LG TIER_1 · Suprim Nakarmi, Junggab Son, Yue Zhao, Zuobin Xiong ·

    Fed-Listing: Federated Label Distribution Inference in Graph Neural Networks

    arXiv:2602.00407v2 Announce Type: replace Abstract: Federated Graph Neural Networks (FedGNNs) facilitate collaborative learning across multiple clients with graph-structured data while preserving user privacy. However, emerging research indicates that within this setting, shared …

  4. arXiv cs.LG TIER_1 · Yui Hashimoto, Takayuki Nishio, Yuichi Kitagawa, Takahito Tanimura ·

    Enabling Federated Inference via Unsupervised Consensus Embedding

    arXiv:2605.05718v1 Announce Type: new Abstract: Cooperative inference across independently deployed machine learning models is increasingly desirable in distributed environments, as there is a growing need to leverage multiple models while keeping their data and model parameters …

  5. arXiv cs.LG TIER_1 · Mohamed Lakas, Mohamed Amine Ferrag ·

    VARS-FL: Validation-Aligned Client Selection for Non-IID Federated Learning in IoT Systems

    arXiv:2605.05896v1 Announce Type: new Abstract: Federated learning (FL) systems typically employ stateless client selection, treating each communication round independently and ignoring accumulated evidence of client contribution quality. Under non-IID data, this leads to slow co…
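
VARS-FL's actual selection rule is not given in the excerpt; to illustrate the stateless-vs-stateful contrast the abstract draws, the sketch below keeps a running per-client contribution score (an exponential moving average is our assumption, not the paper's mechanism) and samples clients proportionally to it, where a stateless baseline would sample uniformly every round.

```python
import random

class StatefulSelector:
    """Select clients weighted by an exponential moving average of past
    contribution scores (e.g. validation improvement after their update).
    A stateless selector would ignore `self.score` and sample uniformly."""

    def __init__(self, n_clients, decay=0.8, floor=0.05):
        self.score = [1.0] * n_clients  # optimistic start: everyone gets tried
        self.decay = decay
        self.floor = floor              # keep a minimum chance for every client

    def select(self, k, rng):
        weights = [max(s, self.floor) for s in self.score]
        return rng.choices(range(len(self.score)), weights=weights, k=k)

    def update(self, client, contribution):
        """Fold the newest observed contribution into the running score."""
        self.score[client] = (self.decay * self.score[client]
                              + (1 - self.decay) * contribution)

rng = random.Random(0)
sel = StatefulSelector(n_clients=10)
# Pretend clients 0-4 contribute well (score 1.0) and 5-9 poorly (0.0).
for _ in range(200):
    for c in sel.select(k=3, rng=rng):
        sel.update(c, 1.0 if c < 5 else 0.0)
picks = sel.select(k=1000, rng=rng)
```

The score floor keeps weak clients occasionally sampled, so a client whose data becomes useful again can recover — one way to avoid the instability the abstract attributes to purely stateless selection.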

  6. arXiv cs.LG TIER_1 · Soham Bonnerjee, Sayar Karmakar, Wei Biao Wu ·

    Sharp Gaussian approximations for Decentralized Federated Learning

    arXiv:2505.08125v4 Announce Type: replace-cross Abstract: Federated Learning has gained traction in privacy-sensitive collaborative environments, with local SGD emerging as a key optimization method in decentralized settings. While its convergence properties are well-studied, asy…

  7. arXiv cs.LG TIER_1 · Rémi Khellaf, Erwan Scornet, Aurélien Bellet, Julie Josse ·

    Principled Federated Random Forests for Heterogeneous Data

    arXiv:2602.03258v2 Announce Type: replace-cross Abstract: Random Forests (RF) are among the most powerful and widely used predictive models for centralized tabular data, yet few methods exist to adapt them to the federated learning setting. Unlike most federated learning approach…

  8. Hugging Face Daily Papers TIER_1 ·

    VARS-FL: Validation-Aligned Client Selection for Non-IID Federated Learning in IoT Systems

    Federated learning (FL) systems typically employ stateless client selection, treating each communication round independently and ignoring accumulated evidence of client contribution quality. Under non-IID data, this leads to slow convergence and unstable training, particularly wh…

  9. arXiv cs.LG TIER_1 · Junxiang Wu, Zhiqiang Kou, Hongwei Zeng, Wenke Huang, Biao Liu, Hanlin Gu, Yuheng Jia, Di Jiang, Yang Liu, Xin Geng, Qiang Yang ·

    Trustworthy Federated Label Distribution Learning under Annotation Quality Disparity

    arXiv:2605.04827v1 Announce Type: new Abstract: Label Distribution Learning (LDL) models supervision as an instance-wise probability distribution, enabling fine-grained learning under inherent ambiguity, but its success relies on high-fidelity label distributions that are costly …

  10. arXiv cs.LG TIER_1 · Leon Witt, Togrul Abbasli, Kentaroh Toyoda, Wojciech Samek, Lucy Klinger ·

    Knowledge-Free Correlated Agreement for Incentivizing Federated Learning

    arXiv:2605.04747v1 Announce Type: new Abstract: We introduce Knowledge-Free Correlated Agreement (KFCA) to reward client contributions in federated learning (FL) without relying on ground truth, a public test set, or distribution knowledge. Under categorical reports and an honest…
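
KFCA's mechanism is not detailed in the excerpt; as background, the correlated-agreement family of peer-prediction scores that the title alludes to rewards agreement with a peer on the same task and subtracts agreement on independently drawn tasks, so a blindly constant reporter earns roughly zero in expectation. The toy categorical setup below is a hypothetical sketch of that general idea, not the paper's scoring rule.

```python
import random

def ca_score(reports_i, reports_j, rng):
    """Correlated-agreement-style score for agent i against peer j over
    shared categorical tasks: agreement on the same task, minus agreement
    on randomly mismatched task pairs (the penalty term removes the reward
    a constant reporter would otherwise collect from base rates)."""
    n = len(reports_i)
    bonus = sum(a == b for a, b in zip(reports_i, reports_j)) / n
    # Penalty: compare i's answer on one task with j's on another.
    penalty = sum(reports_i[rng.randrange(n)] == reports_j[rng.randrange(n)]
                  for _ in range(n)) / n
    return bonus - penalty

rng = random.Random(1)
truth = [rng.randrange(3) for _ in range(2000)]
# Two honest agents report the truth 90% of the time; one lazy agent
# always reports category 0 and ignores the tasks entirely.
honest_i = [t if rng.random() < 0.9 else rng.randrange(3) for t in truth]
honest_j = [t if rng.random() < 0.9 else rng.randrange(3) for t in truth]
lazy = [0] * 2000
honest_score = ca_score(honest_i, honest_j, rng)
lazy_score = ca_score(lazy, honest_j, rng)
```

No ground truth, public test set, or distribution knowledge appears anywhere in the score — the property the abstract highlights.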

  11. arXiv cs.AI TIER_1 · Lucy Klinger ·

    Knowledge-Free Correlated Agreement for Incentivizing Federated Learning

    We introduce Knowledge-Free Correlated Agreement (KFCA) to reward client contributions in federated learning (FL) without relying on ground truth, a public test set, or distribution knowledge. Under categorical reports and an honest majority, KFCA is strictly truthful, addressing…

  12. arXiv cs.LG TIER_1 · Rickard Brännvall ·

    Client-Conditional Federated Learning via Local Training Data Statistics

    arXiv:2603.11307v2 Announce Type: replace Abstract: Federated learning (FL) under data heterogeneity remains challenging: existing methods either ignore client differences (FedAvg), require costly cluster discovery (IFCA), or maintain per-client models (Ditto). All degrade when d…

  13. arXiv cs.AI TIER_1 · Judith Sáinz-Pardo Díaz, Álvaro López García ·

    Privacy Preserving Machine Learning Workflow: from Anonymization to Personalized Differential Privacy Budgets in Federated Learning

    arXiv:2605.02372v1 Announce Type: cross Abstract: The growing development of artificial intelligence based solutions, together with privacy legislation, has driven the rise of the so-called privacy preserving machine learning architectures, such as federated learning. While feder…

  14. arXiv cs.LG TIER_1 · Dario Filatrella, Ragnar Thobaben, Mikael Skoglund ·

    A Hierarchical Sampling Framework for bounding the Generalization Error of Federated Learning

    arXiv:2605.03499v1 Announce Type: new Abstract: We study expected generalization bounds for the Hierarchical Federated Learning (HFL) setup using Wasserstein distance. We introduce a generalized framework in which data is sampled hierarchically, and we model it with a multi-layer…
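
The bounds themselves are not reproduced in the excerpt; for readers unfamiliar with the metric they are stated in, the order-1 Wasserstein distance between distributions $\mu$ and $\nu$ on a metric space $(\mathcal{X}, d)$ is

```latex
W_1(\mu, \nu) = \inf_{\gamma \in \Pi(\mu, \nu)}
  \int_{\mathcal{X} \times \mathcal{X}} d(x, y)\, \mathrm{d}\gamma(x, y),
```

where $\Pi(\mu, \nu)$ is the set of couplings, i.e. joint distributions on $\mathcal{X} \times \mathcal{X}$ with marginals $\mu$ and $\nu$.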

  15. arXiv cs.LG TIER_1 · Mikael Skoglund ·

    A Hierarchical Sampling Framework for bounding the Generalization Error of Federated Learning

    We study expected generalization bounds for the Hierarchical Federated Learning (HFL) setup using Wasserstein distance. We introduce a generalized framework in which data is sampled hierarchically, and we model it with a multi-layered tree structure that induces dependencies amon…

  16. Hugging Face Daily Papers TIER_1 ·

    A Hierarchical Sampling Framework for bounding the Generalization Error of Federated Learning

    We study expected generalization bounds for the Hierarchical Federated Learning (HFL) setup using Wasserstein distance. We introduce a generalized framework in which data is sampled hierarchically, and we model it with a multi-layered tree structure that induces dependencies amon…

  17. arXiv cs.LG TIER_1 · Katarzyna Fojcik, Renaldas Zioma, Jogundas Armaitis ·

    LILogic Net: Compact Logic Gate Networks with Learnable Connectivity for Efficient Hardware Deployment

    arXiv:2511.12340v2 Announce Type: replace Abstract: Efficient machine learning deployment requires models that account for hardware constraints. Because binary logic gates are the fundamental primitives of digital hardware, models built directly from logic operations offer a prom…

  18. arXiv stat.ML TIER_1 · Hao Chen, Zavareh Bozorgasl ·

    Resource-Element Energy Difference for Noncoherent Over-the-Air Federated Learning

    arXiv:2605.07263v1 Announce Type: cross Abstract: Over-the-air federated learning (OTA-FL) reduces uplink latency by exploiting waveform superposition, but conventional analog aggregation schemes typically require instantaneous channel state information (CSI), channel inversion, …

  19. arXiv cs.CV TIER_1 · Nicolas Pugeault ·

    Enhancing Federated Quadruplet Learning: Stochastic Client Selection and Embedding Stability Analysis

    Federated Learning (FL) enables decentralised model training across distributed clients without requiring data centralisation. However, the generalisation performance of the global model is usually degraded by data heterogeneity across clients, particularly under limited data ava…

  20. arXiv stat.ML TIER_1 · Zavareh Bozorgasl ·

    Resource-Element Energy Difference for Noncoherent Over-the-Air Federated Learning

    Over-the-air federated learning (OTA-FL) reduces uplink latency by exploiting waveform superposition, but conventional analog aggregation schemes typically require instantaneous channel state information (CSI), channel inversion, and coherent phase alignment, which can be difficu…