Researchers are exploring novel methods for Neural Architecture Search (NAS) using Large Language Models (LLMs). One approach, SPARK, aims to improve LLM knowledge integration by explicitly selecting functional factors for modification, reducing unintended side effects and enhancing efficiency. Another technique, Delta-Code Generation, focuses on fine-tuning LLMs to produce compact code diffs that refine existing architectures rather than generating them from scratch, leading to significant reductions in code verbosity and computational cost. A survey also categorizes NAS methods based on efficiency, robustness, and continual learning, proposing a framework called HERCULES to guide future research in these areas.
Summary written by gemini-2.5-flash-lite from 4 sources.
IMPACT New LLM-driven NAS techniques promise more efficient and robust model development, potentially accelerating AI system deployment.
RANK_REASON Multiple arXiv papers introduce new methods and surveys for Neural Architecture Search (NAS) using LLMs.
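To make the delta-code idea concrete, here is a minimal, hypothetical sketch of refining an existing architecture through a compact delta rather than regenerating the full definition. The config keys, the delta format, and the function name `apply_delta` are illustrative assumptions for this sketch, not the actual schema used by SPARK or Delta-Code Generation.

```python
# Hypothetical sketch: refine an existing architecture via a compact "delta"
# instead of regenerating the full definition. The config keys and the
# delta format are illustrative assumptions, not the papers' actual schema.

import copy

def apply_delta(base_config: dict, delta: dict) -> dict:
    """Return a new config with only the keys named in `delta` changed."""
    refined = copy.deepcopy(base_config)
    for key, value in delta.items():
        refined[key] = value
    return refined

# Existing (baseline) architecture, expressed as a plain config.
base = {
    "depth": 12,
    "width": 256,
    "attention_heads": 8,
    "activation": "gelu",
}

# A compact delta -- the kind of output an LLM fine-tuned for diff-style
# generation might propose -- touches only two fields instead of all four.
delta = {"depth": 16, "activation": "swiglu"}

refined = apply_delta(base, delta)
print(refined)
# {'depth': 16, 'width': 256, 'attention_heads': 8, 'activation': 'swiglu'}
```

The point of the sketch is the claimed efficiency gain: because the model emits only the changed fields, the generated output stays short and the rest of the architecture is guaranteed to remain untouched, which also mirrors SPARK's goal of limiting modifications to explicitly selected factors.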