PulseAugur

Federated learning models risk cross-client data memorization, study finds

A new research paper explores the risks of training-data memorization in large language models used for federated learning. The study proposes a framework to measure both intra-client and inter-client memorization, addressing the limitation of existing methods that consider only single samples. Findings indicate that federated learning models do memorize client data, with intra-client memorization more prevalent than inter-client memorization, and that factors such as decoding strategy and choice of FL algorithm influence how much is memorized.
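The kind of measurement the paper targets can be illustrated with a simple prefix-continuation check: prompt the model with the opening tokens of a client's training sample and test whether it reproduces the held-out suffix verbatim. This is only a hedged sketch of that general idea, not the paper's actual framework; the `generate` callable and the toy model below are assumptions introduced for illustration.

```python
# Hypothetical sketch of a prefix-continuation memorization check.
# `generate` stands in for an LLM's greedy decoder; the paper's
# actual multi-sample framework differs from this single-sample test.

def memorization_rate(samples, generate, prefix_len=4):
    """Fraction of samples whose held-out suffix the model reproduces
    verbatim when prompted with the first `prefix_len` tokens."""
    hits = 0
    for tokens in samples:
        prefix, suffix = tokens[:prefix_len], tokens[prefix_len:]
        if generate(prefix, max_new_tokens=len(suffix)) == suffix:
            hits += 1
    return hits / len(samples) if samples else 0.0

# Toy stand-in model that has "memorized" one client's sample.
memorized = ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]

def toy_generate(prefix, max_new_tokens):
    if prefix == memorized[:len(prefix)]:
        return memorized[len(prefix):len(prefix) + max_new_tokens]
    return ["<unk>"] * max_new_tokens

client_a = [memorized]  # sample seen in training (intra-client case)
client_b = [["a", "different", "sentence", "never", "seen", "during", "any", "training", "run"]]

print(memorization_rate(client_a, toy_generate))  # 1.0
print(memorization_rate(client_b, toy_generate))  # 0.0
```

Applying the same check to another client's data (as with `client_b` above) gives the cross-client variant: a nonzero rate there would indicate inter-client leakage rather than ordinary memorization of a client's own data.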

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Introduces a new method to quantify data memorization risks in federated learning, potentially impacting privacy-preserving AI development.

RANK_REASON This is a research paper published on arXiv detailing a new framework for measuring data memorization in federated learning models.

Read on arXiv cs.LG →

COVERAGE [1]

  1. arXiv cs.LG TIER_1 · Tinnakit Udsa, Can Udomcharoenchaikit, Patomporn Payoungkhamdee, Sarana Nutanong, Norrathep Rattanavipanon

    Exploring Cross-Client Memorization of Training Data in Large Language Models for Federated Learning

    arXiv:2510.08750v2 Announce Type: replace Abstract: Federated learning (FL) enables collaborative training without raw data sharing, but still risks training data memorization. Existing FL memorization detection techniques focus on one sample at a time, underestimating more subtl…