Apple researchers have developed a new method for more accurately accounting for privacy loss in machine learning models that use subsampling and random allocation. Their approach, detailed in a research paper, allows for efficient computation of privacy loss distributions, which can yield tighter privacy parameters than existing methods. This is particularly beneficial for training models with differentially private stochastic gradient descent (DP-SGD), where the extension to subsampling enables accurate accounting in the common practical setting.
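The summary does not detail the paper's algorithm, but the general idea behind privacy loss distribution (PLD) accounting can be sketched generically. The example below is an illustrative toy, not Apple's method: it builds the PLD of randomized response (a simple mechanism chosen here for clarity), composes it over multiple rounds by convolution (losses add, probabilities multiply), and reads off the delta guarantee at a given epsilon. The function names (`rr_pld`, `compose`, `delta`) are hypothetical.

```python
import math
from collections import defaultdict

def rr_pld(p):
    # PLD of randomized response with truth probability p:
    # the privacy loss is ln(p/(1-p)) with probability p,
    # and ln((1-p)/p) with probability 1-p.
    return {math.log(p / (1 - p)): p, math.log((1 - p) / p): 1 - p}

def compose(pld_a, pld_b):
    # Composition of mechanisms convolves their PLDs:
    # losses add, probabilities multiply.
    out = defaultdict(float)
    for loss_a, prob_a in pld_a.items():
        for loss_b, prob_b in pld_b.items():
            out[loss_a + loss_b] += prob_a * prob_b
    return dict(out)

def delta(pld, eps):
    # delta(eps) = E[(1 - exp(eps - L))_+] under the PLD,
    # i.e. only losses exceeding eps contribute.
    return sum(p * (1 - math.exp(eps - l)) for l, p in pld.items() if l > eps)

# Ten-fold composition of randomized response with p = 0.75.
pld = rr_pld(0.75)
for _ in range(9):
    pld = compose(pld, rr_pld(0.75))
```

Tracking the full distribution of the privacy loss, rather than a single worst-case bound, is what lets PLD accountants report tighter (epsilon, delta) pairs under repeated composition; the paper's contribution, per the summary, is extending this kind of accounting efficiently to subsampling and random allocation.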
Summary written by gemini-2.5-flash-lite from 1 source.