Two new research papers propose methods to improve black-box knowledge distillation, a technique for compressing a large teacher model into a smaller student when only the teacher's outputs are available, not its internal parameters or its original training data. The first paper introduces a generative adversarial network scheme that adaptively selects high-confidence synthetic images to enhance the diversity of the distillation set (see the sketch below). The second presents a three-phase framework called DIP-KD, which synthesizes image priors, applies contrastive learning, and employs a primer student for distillation, likewise emphasizing data diversity. Both approaches report state-of-the-art results on various benchmarks.
Summary written by gemini-2.5-flash-lite from 5 sources.
IMPACT These methods could enable more efficient model compression in scenarios with limited data access, potentially lowering deployment costs for complex AI systems.
RANK_REASON Two academic papers published on arXiv propose novel methods for black-box knowledge distillation.
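Neither summary includes code, but the mechanism the two papers share, querying a black-box teacher and training the student only on samples the teacher labels confidently, can be illustrated. Below is a minimal PyTorch sketch of that idea, not either paper's actual method; the `generator`, `conf_threshold`, and `temperature` names and values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, generator, optimizer,
                 z_dim=128, batch_size=64, conf_threshold=0.9, temperature=4.0):
    """One black-box distillation step: generate candidate images, keep only
    those the teacher labels with high confidence, and train the student to
    match the teacher's soft outputs on the filtered batch."""
    # Sample synthetic images; .detach() because only the student is updated here.
    z = torch.randn(batch_size, z_dim)
    images = generator(z).detach()

    # Query the black-box teacher: only its output probabilities are used,
    # never its weights or original training data.
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(images) / temperature, dim=1)

    # Adaptive selection: keep candidates the teacher classifies confidently,
    # which filters out low-quality generations.
    keep = teacher_probs.max(dim=1).values >= conf_threshold
    if not keep.any():
        return None  # nothing confident this round; caller should resample

    # Distillation loss: KL divergence between student and teacher soft labels.
    student_log_probs = F.log_softmax(student(images[keep]) / temperature, dim=1)
    loss = F.kl_div(student_log_probs, teacher_probs[keep], reduction="batchmean")

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full pipeline the generator would itself be trained (adversarially, in the first paper's scheme), and DIP-KD would presumably start from synthesized image priors and add a contrastive term rather than sampling raw noise; the confidence-filtered student update above is only the common core.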