PulseAugur

FedAttr protocol enables privacy-preserving attribution in federated LLM fine-tuning

Researchers have developed FedAttr, a protocol for identifying which clients in a federated learning setup have fine-tuned a large language model on watermarked data. It addresses a core obstacle in federated learning: secure aggregation normally obscures individual client contributions. FedAttr combines a paired-subset-difference mechanism for estimating per-client updates with a differential scoring approach built on a watermark detector, achieving a perfect true positive rate and zero false positives in the paper's empirical tests.
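The summary's two components can be illustrated with a toy sketch. This is not the paper's implementation: the watermark detector below is a hypothetical stand-in (a projection onto a known "watermark direction"), and the subset queries simply sum plaintext vectors where real secure aggregation would use masking. The point is the shape of the idea: aggregate two client subsets that differ only in one target client, take the difference to estimate that client's update, then score the estimate with the detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each client's update is a parameter vector. Clients that
# fine-tuned on watermarked data carry a bias along a known watermark
# direction (hypothetical stand-in for a real watermark detector).
DIM = 16
watermark_dir = rng.standard_normal(DIM)
watermark_dir /= np.linalg.norm(watermark_dir)

def make_update(uses_watermarked_data: bool) -> np.ndarray:
    noise = rng.standard_normal(DIM)
    return noise + (5.0 * watermark_dir if uses_watermarked_data else 0.0)

# Five clients; only "c2" trained on watermarked data in this toy run.
clients = {f"c{i}": make_update(i == 2) for i in range(5)}

def secure_aggregate(subset: list[str]) -> np.ndarray:
    # The server only ever sees sums over subsets, never a lone update.
    return np.sum([clients[c] for c in subset], axis=0)

def paired_subset_difference(target: str, others: list[str]) -> np.ndarray:
    # Query two subsets that differ only in the target client; their
    # difference is an estimate of the target's individual update.
    with_target = secure_aggregate(others + [target])
    without_target = secure_aggregate(others)
    return with_target - without_target

def watermark_score(update: np.ndarray) -> float:
    # Differential score: projection onto the watermark direction.
    # Large positive values suggest watermarked training data.
    return float(update @ watermark_dir)

others = ["c0", "c1"]
scores = {c: watermark_score(paired_subset_difference(c, others))
          for c in ["c2", "c3", "c4"]}
flagged = [c for c, s in scores.items() if s > 2.5]
```

In this deterministic toy, the paired-subset difference recovers the target's update exactly; in the actual protocol the estimate must contend with aggregation noise and privacy masking, which is where the paper's empirical TPR/FPR results come in.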

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Enhances data ownership and attribution capabilities in collaborative LLM fine-tuning scenarios.

RANK_REASON This is a research paper detailing a new protocol for privacy-preserving attribution in federated LLM fine-tuning.

Read on arXiv cs.LG →

COVERAGE [2]

  1. arXiv cs.LG TIER_1 · Su Zhang, Junfeng Guo, Heng Huang ·

    FedAttr: Towards Privacy-preserving Client-Level Attribution in Federated LLM Fine-tuning

    arXiv:2605.06596v1 · Abstract: Watermark radioactivity testing methods can detect whether a model was trained on watermarked documents, and have become key tools for protecting data ownership in the fine-tuning of large language models (LLMs). Existing …

  2. arXiv cs.LG TIER_1 · Heng Huang ·

    FedAttr: Towards Privacy-preserving Client-Level Attribution in Federated LLM Fine-tuning

    Watermark radioactivity testing methods can detect whether a model was trained on watermarked documents, and have become key tools for protecting data ownership in the fine-tuning of large language models (LLMs). Existing works have proved their effectiveness in centraliz…