Researchers have identified a new privacy vulnerability in machine learning models that stems from the output label space rather than the training data itself. This side channel is particularly relevant in continual learning, where the label space evolves over time. To address it, the paper formalizes differential privacy for continual learning and introduces two mitigations: applying differential privacy to label releases, or using a large public label space. Experiments on Split-CIFAR-100 and Split-ImageNet-R demonstrate improved accuracy and privacy guarantees compared to existing approaches.
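The first mitigation, releasing labels under differential privacy, can be illustrated with a standard randomized-response mechanism. This is a generic sketch, not the paper's actual mechanism; the function name and parameters are hypothetical.

```python
import math
import random

def dp_label_release(true_label, label_space, epsilon):
    """Release a label under local differential privacy via randomized
    response: keep the true label with probability p, otherwise return
    a uniformly random other label from the label space.
    Illustrative sketch only; the paper's mechanism may differ."""
    k = len(label_space)
    # Probability of reporting truthfully for epsilon-LDP randomized response.
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return true_label
    others = [label for label in label_space if label != true_label]
    return random.choice(others)
```

A larger epsilon keeps the true label more often (less privacy, more utility); a smaller epsilon makes the released label closer to uniform over the label space.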
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Introduces a novel privacy attack vector and mitigation strategies for ML models, particularly in continual learning settings.
RANK_REASON Academic paper detailing a new privacy vulnerability and mitigation techniques for machine learning models.