PulseAugur

FedPuReL method preserves foundation model balance in long-tailed federated learning

Researchers have developed a new method called FedPuReL to address challenges in personalized federated learning, particularly under long-tailed and non-IID data distributions. The approach purifies local gradients using zero-shot predictions to keep the global model balanced, and treats personalization as a residual correction on top of that global model. Experiments show FedPuReL outperforms existing methods in both global and personalized model performance across a range of long-tailed scenarios.
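The two ideas named in the summary, gradient purification against zero-shot predictions and personalization as a residual, can be sketched roughly as follows. This is a minimal illustration assuming a simple projection-based purification rule; the function names, the projection rule, and the residual form are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def purify_gradient(local_grad, zero_shot_grad):
    """Hypothetical purification: drop the component of a client's local
    gradient that conflicts with the zero-shot (balanced, frozen foundation
    model) direction, so aggregated updates stay balanced. The projection
    rule here is an assumption for illustration."""
    zs = zero_shot_grad / (np.linalg.norm(zero_shot_grad) + 1e-12)
    coef = local_grad @ zs
    if coef < 0:  # conflicting component: project it out
        local_grad = local_grad - coef * zs
    return local_grad

def personalized_params(global_params, residual):
    """Personalization as a residual correction added on top of the
    shared global model, rather than a separately trained model."""
    return global_params + residual
```

A purified gradient that agreed with the zero-shot direction passes through unchanged; only the conflicting part is removed, which is one common way such "gradient surgery" schemes are built.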

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Introduces a novel approach to improve the robustness and fairness of personalized federated learning models on real-world, imbalanced datasets.

RANK_REASON This is a research paper published on arXiv detailing a new method for federated learning.


COVERAGE [2]

  1. arXiv cs.CV TIER_1 · Shihao Hou, Chikai Shang, Zhiheng Yang, Jiacheng Yang, Xinyi Shang, Junlong Gao, Yiqun Zhang, Yang Lu

    Fine-Tuning Impairs the Balancedness of Foundation Models in Long-tailed Personalized Federated Learning

    arXiv:2605.02247v1 Announce Type: new Abstract: Personalized federated learning (PFL) with foundation models has emerged as a promising paradigm enabling clients to adapt to heterogeneous data distributions. However, real-world scenarios often face the co-occurrence of non-IID da…

  2. arXiv cs.CV TIER_1 · Yang Lu

    Fine-Tuning Impairs the Balancedness of Foundation Models in Long-tailed Personalized Federated Learning

    Personalized federated learning (PFL) with foundation models has emerged as a promising paradigm enabling clients to adapt to heterogeneous data distributions. However, real-world scenarios often face the co-occurrence of non-IID data and long-tailed class distributions, presenti…