Neuroevolution framework boosts LLM output diversity via prompt embedding evolution

Researchers have developed QD-LLM, a framework that uses parameter-efficient neuroevolution to increase the diversity of outputs from large language models. The method evolves compact prompt embeddings, which act as interfaces that steer large, frozen LLMs without full model fine-tuning. The system employs Quality-Diversity optimization with hybrid behavior characterization and co-evolutionary operators, and shows significant improvements in output coverage and quality scores across several benchmarks.
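To make the Quality-Diversity idea concrete, here is a minimal MAP-Elites-style sketch: an archive keeps one elite per behavior cell, and mutated prompt embeddings replace an elite only if they score higher in that cell. The embedding size, the `quality` and `behavior` functions, and the grid are all toy stand-ins, not the paper's actual scoring or characterization.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 32   # toy stand-in for the paper's ~32K-parameter prompt embedding
GRID = 10      # behavior space discretized into a 10x10 archive

def quality(emb):
    # Hypothetical quality score; in QD-LLM this would come from scoring
    # the frozen LLM's output produced under the evolved prompt embedding.
    return -float(np.sum((emb - 0.5) ** 2))

def behavior(emb):
    # Hypothetical 2-D behavior characterization (the real system uses a
    # hybrid characterization of the generated text), mapped to grid cells.
    b = (np.mean(emb[:EMB_DIM // 2]), np.mean(emb[EMB_DIM // 2:]))
    return tuple(min(GRID - 1, max(0, int(v * GRID))) for v in b)

archive = {}  # cell -> (quality, embedding): one elite per behavior niche

for _ in range(2000):
    if archive and rng.random() < 0.9:
        keys = list(archive)
        parent = archive[keys[rng.integers(len(keys))]][1]  # select an elite
    else:
        parent = rng.random(EMB_DIM)                        # random restart
    child = np.clip(parent + rng.normal(0, 0.05, EMB_DIM), 0.0, 1.0)  # mutate
    cell, q = behavior(child), quality(child)
    if cell not in archive or q > archive[cell][0]:
        archive[cell] = (q, child)  # keep only the best per behavior cell

coverage = len(archive) / GRID ** 2
print(f"cells filled: {len(archive)}, coverage: {coverage:.2f}")
```

The archive's fill rate corresponds to the "output coverage" metric mentioned above: diversity is measured by how many distinct behavior niches hold a viable elite, not by the single best score.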

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Enhances LLM output diversity and quality, potentially improving downstream applications like test generation and fine-tuning.

RANK_REASON The cluster contains an academic paper detailing a new method for improving LLM output diversity.

Read on arXiv cs.CL →

COVERAGE [1]

  1. arXiv cs.CL TIER_1 · Siu Ming Yiu

    Parameter-Efficient Neuroevolution for Diverse LLM Generation: Quality-Diversity Optimization via Prompt Embedding Evolution

    Large Language Models exhibit mode collapse, producing homogeneous outputs that fail to explore valid solution spaces. We present QD-LLM, a framework for parameter-efficient neuroevolution that evolves prompt embeddings, compact neural interfaces (~32K parameters) that steer gene…