DP-SGD
PulseAugur coverage of DP-SGD: every cluster mentioning DP-SGD across labs, papers, and developer communities, ranked by signal.
3 days with sentiment data
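
For context on the mechanics behind every item below, here is a minimal sketch of one DP-SGD step in NumPy: per-example gradients are clipped to a fixed L2 norm, summed, and perturbed with Gaussian noise before the parameter update. The clip norm C, noise multiplier sigma, and toy linear model are illustrative choices, not drawn from any paper covered here.

```python
# A minimal sketch of one DP-SGD step on a toy linear-regression model.
# C, sigma, lr, and the model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 32, 5                        # batch size, feature dimension
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
w = np.zeros(d)

C = 1.0       # per-example gradient clipping norm
sigma = 1.1   # noise multiplier (sets the privacy-utility trade-off)
lr = 0.1

# Per-example gradients of squared loss: g_i = (x_i . w - y_i) * x_i
residuals = X @ w - y
per_example_grads = residuals[:, None] * X           # shape (n, d)

# Clip each example's gradient to L2 norm at most C
norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
clipped = per_example_grads / np.maximum(1.0, norms / C)

# Sum, add Gaussian noise calibrated to C, average, then step
noisy_sum = clipped.sum(axis=0) + rng.normal(scale=sigma * C, size=d)
w -= lr * noisy_sum / n
```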
-
New theory bounds KAN training, reveals privacy-utility gap
Researchers have established new theoretical bounds for training Kolmogorov-Arnold Networks (KANs), a structured alternative to standard MLPs. The work analyzes KANs trained with mini-batch stochastic gradient descent (…
-
New DP-LAC method enhances private federated LLM fine-tuning
Researchers have developed DP-LAC, a new method for differentially private federated fine-tuning of language models. This technique improves upon existing adaptive clipping methods by estimating an initial clipping thre…
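
The DP-LAC initialization itself is only partially visible in the summary above, so the sketch below shows a generic quantile-based adaptive-clipping update (in the style of Andrew et al.'s adaptive clipping) as a stand-in for the family of methods DP-LAC improves on. The target quantile, step size eta, and noise scale are assumptions for illustration, not DP-LAC's actual parameters.

```python
# A generic quantile-based adaptive-clipping update, shown only to
# illustrate the method family DP-LAC builds on; it is not DP-LAC itself.
import numpy as np

def update_clip_threshold(C, grad_norms, target_quantile=0.5,
                          eta=0.2, count_noise_std=0.0,
                          rng=np.random.default_rng()):
    """Move C toward the target quantile of observed gradient norms."""
    # (Noisy) fraction of examples whose gradient norm fell at or below C
    frac_below = (grad_norms <= C).mean()
    frac_below += rng.normal(scale=count_noise_std) / len(grad_norms)
    # Geometric update: shrink C if too many norms fit under it, else grow
    return C * np.exp(-eta * (frac_below - target_quantile))

# Example: norms concentrated near 3.0 pull an initial C = 1.0 upward
norms = np.abs(np.random.default_rng(1).normal(3.0, 0.5, size=256))
C = 1.0
for _ in range(50):
    C = update_clip_threshold(C, norms)
print(round(C, 2))  # converges near the median norm, ~3.0
```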
-
New DP-SGD subsampling methods offer improved privacy-utility trade-offs
Two new research papers explore optimized subsampling techniques for Differentially Private Stochastic Gradient Descent (DP-SGD). The first paper, focusing on random shuffling, provides tight upper and lower bounds with…
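
Neither paper's bounds are reproduced here, but the sketch below contrasts the two batch-selection schemes this line of work analyzes: Poisson subsampling, where each example joins a batch independently with rate q, versus random shuffling, which takes fixed-size slices of a permuted pass. Dataset and batch sizes are illustrative assumptions.

```python
# A minimal sketch contrasting the two DP-SGD batch-selection schemes.
import numpy as np

rng = np.random.default_rng(0)
n, batch_size = 1000, 100
q = batch_size / n     # expected sampling rate for Poisson subsampling

# Poisson subsampling: each example included independently, so the
# realized batch size is random, Binomial(n, q)
poisson_batch = np.flatnonzero(rng.random(n) < q)

# Random shuffling: one permutation, then fixed-size slices per epoch
perm = rng.permutation(n)
shuffle_batches = [perm[i:i + batch_size] for i in range(0, n, batch_size)]

print(len(poisson_batch))       # ~100 on average, varies per step
print(len(shuffle_batches[0]))  # exactly 100
```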
-
Researchers reveal supply-chain attacks can steal secrets from local LLM fine-tuning
Researchers have developed a novel method to steal sensitive information from locally fine-tuned large language models by exploiting vulnerabilities in their supply-chain code. This technique moves beyond passive weight…