PulseAugur
Apple researchers develop efficient privacy loss accounting for subsampling

Apple researchers have developed a new method for more accurately accounting for privacy loss in machine learning models trained with subsampling and random allocation. Their approach, detailed in a research paper, enables efficient computation of privacy loss distributions, which can yield tighter privacy parameters than existing methods. The advance is particularly useful for training models with differentially private stochastic gradient descent (DP-SGD), and it extends accurate privacy loss accounting to subsampling schemes.
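The paper's own accountant is not reproduced here. As a minimal illustration of the general privacy-loss-distribution (PLD) technique the summary refers to, the sketch below builds the PLD of binary randomized response and composes it by convolution; the function names (`rr_pld`, `convolve`, `delta`) and the choice of mechanism are illustrative assumptions, not taken from the paper.

```python
import math
from collections import defaultdict


def rr_pld(p):
    """PLD of binary randomized response (report the true bit w.p. 1 - p),
    for one pair of neighboring inputs: maps loss value -> probability."""
    return {math.log((1 - p) / p): 1 - p,
            math.log(p / (1 - p)): p}


def convolve(pld_a, pld_b):
    """Compose two independent mechanisms: losses add, probabilities multiply."""
    out = defaultdict(float)
    for la, pa in pld_a.items():
        for lb, pb in pld_b.items():
            out[la + lb] += pa * pb
    return dict(out)


def delta(pld, eps):
    """Tight delta(eps) from a PLD: E[(1 - exp(eps - L))_+]."""
    return sum(prob * (1 - math.exp(eps - loss))
               for loss, prob in pld.items() if loss > eps)


# Compose 10 runs of randomized response with p = 0.25.
pld = rr_pld(0.25)
for _ in range(9):
    pld = convolve(pld, rr_pld(0.25))
```

After composition, `delta(pld, eps)` gives the exact trade-off curve for this toy mechanism; real accountants discretize the loss values so the convolution stays small, which is where efficiency gains of the kind described above matter.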

Summary written by gemini-2.5-flash-lite from 1 source.


Read on Apple Machine Learning Research →


COVERAGE [1]

  1. Apple Machine Learning Research

    Efficient Privacy Loss Accounting for Subsampling and Random Allocation

    We consider the privacy amplification properties of a sampling scheme in which a user’s data is used in k steps chosen randomly and uniformly from a sequence (or set) of t steps. This sampling scheme has been recently applied in the context of differentially private optimization …
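The sampling scheme the abstract describes (each user's data used in k steps chosen uniformly from t steps) can be sketched as a simple simulation; the function and parameter names below are illustrative, not from the paper.

```python
import random


def random_allocation(num_users, t, k, seed=0):
    """Assign each user's data to k distinct steps chosen uniformly
    at random from a sequence of t steps.

    Returns a list of length t mapping each step index to the set of
    participating users.
    """
    rng = random.Random(seed)
    steps = [set() for _ in range(t)]
    for user in range(num_users):
        for step in rng.sample(range(t), k):
            steps[step].add(user)
    return steps


# Every user participates in exactly k steps; marginally, each user
# appears in any given step with probability k / t.
alloc = random_allocation(num_users=100, t=10, k=3)
```

Unlike independent per-step (Poisson) subsampling, participation counts here are fixed rather than random, which is what makes the amplification analysis different.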