PulseAugur
research · 4 sources

LLMs accelerate neural architecture search with novel delta-based code generation

Researchers are exploring new methods for Neural Architecture Search (NAS) that use Large Language Models (LLMs). One approach, SPARK, aims to improve how LLMs integrate established architectural knowledge by explicitly selecting functional factors for modification, reducing unintended side effects and improving search efficiency (a factor-scoped sketch follows the coverage list below). Another technique, Delta-Code Generation, fine-tunes LLMs to produce compact code diffs that refine existing architectures rather than regenerating them from scratch, significantly reducing code verbosity and computational cost. A survey also categorizes NAS methods by efficiency, robustness, and continual learning, proposing a framework called HERCULES to guide future research in these areas.
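To make the delta idea concrete, here is a minimal sketch (in Python) of applying an LLM-proposed edit to a parent architecture. The (search, replace) delta format, the apply_delta helper, and the toy network are illustrative assumptions; the sources excerpted below do not specify the papers' actual diff representation.

    # Sketch of delta-based generation: the LLM emits a small edit against a
    # known parent architecture instead of a full implementation.
    # All names and the delta format here are assumptions, not the paper's API.
    import textwrap

    # Parent architecture, written once and reused across the search.
    PARENT_SOURCE = textwrap.dedent("""\
        import torch.nn as nn

        class Net(nn.Module):
            def __init__(self):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1),
                    nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1),
                    nn.ReLU(),
                )
                self.head = nn.Linear(32, 10)
    """)

    # Hypothetical LLM output: a compact (search, replace) pair that widens
    # one kernel -- a handful of tokens instead of a full regeneration.
    llm_delta = ("nn.Conv2d(32, 32, 3, padding=1)",
                 "nn.Conv2d(32, 32, 5, padding=2)")

    def apply_delta(source: str, delta: tuple[str, str]) -> str:
        """Apply one search/replace edit; fail loudly if the anchor is stale."""
        search, replace = delta
        if search not in source:
            raise ValueError("delta does not match the parent source")
        return source.replace(search, replace, 1)

    child_source = apply_delta(PARENT_SOURCE, llm_delta)
    print(child_source)  # the refined architecture, ready to compile and score

The point of the delta is token economy: the edit above is two short lines, while regenerating PARENT_SOURCE from scratch would cost the full module's worth of tokens on every search step.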

Summary written by gemini-2.5-flash-lite from 4 sources.

IMPACT New LLM-driven NAS techniques promise more efficient and robust model development, potentially accelerating AI system deployment.

RANK_REASON Multiple arXiv papers introduce new methods and surveys for Neural Architecture Search (NAS) using LLMs.


COVERAGE [4]

  1. arXiv cs.LG TIER_1 · Zhen Liu, Yuhan Liu, Jingwen Fu

    Structured Progressive Knowledge Activation for LLM-Driven Neural Architecture Search

    arXiv:2605.04057v1 Announce Type: new Abstract: This paper focuses on a key challenge in Neural Architecture Search (NAS): integrating established architectural knowledge while exploring new designs under expensive evaluations. Large language models (LLMs) are a promising assista…

  2. arXiv cs.LG TIER_1 · Matteo Gambella, Fabrizio Pittorino, Manuel Roveri

    HERCULES: Hardware-Efficient, Robust, Continual Learning Neural Architecture Search

    arXiv:2605.04103v1 Announce Type: new Abstract: Neural Architecture Search (NAS) has emerged as a powerful framework for automatically discovering neural architectures that balance accuracy and efficiency. However, as AI transitions from static benchmarks to real-world deployment…

  3. arXiv cs.LG TIER_1 · Santosh Premi Adhikari, Radu Timofte, Dmitry Ignatov

    Delta-Based Neural Architecture Search: LLM Fine-Tuning via Code Diffs

    arXiv:2605.04903v1 Announce Type: new Abstract: Large language models (LLMs) show strong potential for neural architecture generation, yet existing approaches produce complete model implementations from scratch -- computationally expensive and yielding verbose code. We propose De…

  4. arXiv cs.CV TIER_1 · Dmitry Ignatov

    Delta-Based Neural Architecture Search: LLM Fine-Tuning via Code Diffs

    Large language models (LLMs) show strong potential for neural architecture generation, yet existing approaches produce complete model implementations from scratch -- computationally expensive and yielding verbose code. We propose Delta-Code Generation, where fine-tuned LLMs gener…
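The factor-scoped sketch promised above: the summary's description of SPARK, "explicitly selecting functional factors for modification", suggests limiting each edit to one named factor of the design. The sketch below is one plausible reading under that assumption; the factor names and the mutate_factor helper are invented for illustration and are not SPARK's actual interface.

    # One plausible reading of factor-scoped modification: an architecture is
    # a config of named functional factors, and a mutation touches exactly one
    # of them, leaving the rest untouched (limiting unintended side effects).
    # Factor names and helpers are illustrative assumptions, not SPARK's API.
    import random

    FACTOR_CHOICES = {
        "activation": ["relu", "gelu", "silu"],
        "width": [32, 64, 128],
        "depth": [2, 4, 6],
    }

    def mutate_factor(config: dict, factor: str, rng: random.Random) -> dict:
        """Return a copy of config with only the selected factor changed."""
        options = [v for v in FACTOR_CHOICES[factor] if v != config[factor]]
        child = dict(config)  # every other factor carries over unchanged
        child[factor] = rng.choice(options)
        return child

    rng = random.Random(0)
    parent = {"activation": "relu", "width": 64, "depth": 4}
    # In SPARK an LLM would reason about which factor to modify; here the
    # choice is random purely to keep the sketch self-contained.
    child = mutate_factor(parent, rng.choice(list(FACTOR_CHOICES)), rng)
    print(parent, "->", child)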