Researchers have developed AdaLeZO, a new framework designed to make Zeroth-Order (ZO) optimization more efficient for fine-tuning Large Language Models. The method addresses the slow convergence and high variance typically associated with ZO by dynamically allocating computational resources to the most sensitive layers of a model. AdaLeZO functions as a plug-and-play module, accelerating existing ZO optimizers by up to 3.0x without increasing memory usage.
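To make the mechanics concrete, here is a minimal sketch of the kind of step such frameworks accelerate: a MeZO-style SPSA gradient estimate that regenerates the random perturbation from a seed instead of storing it (which is what keeps memory flat relative to inference), plus a stand-in for layerwise allocation. The function names, the round-robin layer selection, and all hyperparameters are illustrative assumptions, not AdaLeZO's published algorithm, which ranks layers by an actual sensitivity signal.

```python
import torch

@torch.no_grad()
def spsa_step(params, closure, eps=1e-3, lr=1e-6, seed=0):
    """One zeroth-order (SPSA-style) update over `params`.

    The gradient is estimated from two forward passes along a shared
    random direction z; z is regenerated from `seed` rather than stored,
    so memory stays at inference level.
    """
    torch.manual_seed(seed)                      # theta -> theta + eps*z
    for p in params:
        p.data.add_(eps * torch.randn_like(p))
    loss_plus = closure()

    torch.manual_seed(seed)                      # theta + eps*z -> theta - eps*z
    for p in params:
        p.data.sub_(2.0 * eps * torch.randn_like(p))
    loss_minus = closure()

    # Projected gradient estimate along z: (L+ - L-) / (2*eps).
    g = (loss_plus - loss_minus) / (2.0 * eps)

    torch.manual_seed(seed)                      # restore theta, then step
    for p in params:
        z = torch.randn_like(p)
        p.data.add_(eps * z)                     # back to theta
        p.data.sub_(lr * g * z)                  # SGD step along z

def zo_finetune_step(model, closure, top_k=4, step=0):
    """Hypothetical layerwise allocation: perturb only a subset of layers.

    An adaptive scheme like the one AdaLeZO describes would pick layers
    by sensitivity; this sketch simply rotates through them as a stand-in.
    """
    layers = list(model.children())
    start = (step * top_k) % len(layers)
    chosen = [layers[(start + i) % len(layers)] for i in range(top_k)]
    params = [p for layer in chosen for p in layer.parameters()]
    spsa_step(params, closure, seed=step)        # fresh direction each step
```

Restricting the perturbation to a few layers per step reduces the variance of each estimate at a fixed forward-pass budget; the open question an adaptive method answers is which layers deserve that budget at each point in training.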