Google DeepMind has introduced VaultGemma, a 1-billion-parameter language model trained from scratch with differential privacy. The release is accompanied by research detailing new scaling laws for differentially private language models, which characterize the trade-offs between privacy, utility, and computational cost. The findings suggest that compute-optimal private training favors smaller models trained with larger batch sizes than is typical. VaultGemma's weights are publicly available on Hugging Face and Kaggle to foster further development in private AI.
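The large-batch finding follows from how differentially private training (DP-SGD) works: each example's gradient is clipped to a fixed norm, the batch is summed, and Gaussian noise calibrated to that norm is added, so averaging over a larger batch shrinks the relative noise. A minimal numpy sketch of one such aggregation step is below; the hyperparameter values are illustrative, not VaultGemma's actual settings.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One illustrative DP-SGD aggregation step (not VaultGemma's exact recipe):
    clip each per-example gradient, sum, add Gaussian noise scaled to the
    clip norm, and average over the batch."""
    rng = np.random.default_rng(0) if rng is None else rng
    batch_size = per_example_grads.shape[0]
    # Clip: rescale each example's gradient so its L2 norm is at most clip_norm.
    norms = np.linalg.norm(per_example_grads.reshape(batch_size, -1), axis=1)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale[:, None]
    # Gaussian mechanism: noise standard deviation is tied to the clip norm,
    # not to the batch size, so it is fixed per step.
    noised = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=per_example_grads.shape[1])
    # Dividing by a larger batch size shrinks the noise's relative magnitude,
    # which is why private training favors large batches.
    return noised / batch_size
```

With `noise_multiplier=0` this reduces to ordinary clipped-gradient averaging, which makes the clipping behavior easy to check in isolation.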
Summary written by gemini-2.5-flash-lite from 2 sources.