PulseAugur
LIVE 10:29:26

MBPP

PulseAugur coverage of MBPP — every cluster mentioning MBPP across labs, papers, and developer communities, ranked by signal.

Total · 30d: 8 (8 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 8 (8 over 90d)
TIER MIX · 90D
RELATIONSHIPS
SENTIMENT · 30D: 2 days with sentiment data

RECENT · PAGE 1/1 · 8 TOTAL
  1. TOOL · CL_30784 ·

    New framework CANTANTE optimizes LLM agent systems via credit attribution

    Researchers have introduced CANTANTE, a new framework designed to optimize multi-agent systems powered by large language models. This system addresses the challenge of assigning credit for performance by decomposing sys…
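    The truncated summary does not say how CANTANTE decomposes system performance, so the following is only a minimal sketch of one standard way to attribute credit across agents: a Monte Carlo Shapley estimate over which agents are active. The `evaluate` harness and agent names are hypothetical, not CANTANTE's API.

```python
import random
from typing import Callable, Dict, List

def shapley_credit(agents: List[str],
                   evaluate: Callable[[List[str]], float],
                   n_samples: int = 200,
                   seed: int = 0) -> Dict[str, float]:
    """Monte Carlo Shapley estimate of each agent's marginal contribution.

    evaluate(subset) returns the end-to-end score with only `subset` active
    (hypothetical harness, not the paper's interface).
    """
    rng = random.Random(seed)
    credit = {a: 0.0 for a in agents}
    for _ in range(n_samples):
        order = agents[:]
        rng.shuffle(order)               # random arrival order of agents
        active: List[str] = []
        prev = evaluate(active)
        for agent in order:
            active.append(agent)
            score = evaluate(active)
            credit[agent] += score - prev  # marginal gain from adding this agent
            prev = score
    return {a: total / n_samples for a, total in credit.items()}

# Toy system whose score depends on which (hypothetical) agents are enabled.
def toy_eval(subset: List[str]) -> float:
    score = 0.2
    score += 0.4 if "planner" in subset else 0.0
    score += 0.3 if "coder" in subset else 0.0
    score += 0.1 if "reviewer" in subset and "coder" in subset else 0.0
    return score

print(shapley_credit(["planner", "coder", "reviewer"], toy_eval))
```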

  2. RESEARCH · CL_30616 ·

    New AI wrapper guides release decisions for iterative workflows

    Researchers have developed a new statistical method to determine when AI workflows should release their outputs, particularly for systems that use iterative generate-evaluate-revise loops. This "always-valid release wra…
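    As a rough illustration of the release-decision idea, here is a generate-evaluate-revise loop that stops once a time-uniform (anytime-valid) lower confidence bound on output quality clears a threshold. The bound below is a simple union-bound Hoeffding construction and the `generate()` interface is an assumption; the paper's actual wrapper statistic is not specified in the truncated summary.

```python
import math
from typing import Callable, Tuple

def release_when_confident(generate: Callable[[], Tuple[str, float]],
                           threshold: float = 0.7,
                           alpha: float = 0.05,
                           max_rounds: int = 200) -> str:
    """Generate-evaluate-revise loop with an anytime-valid stopping rule.

    `generate()` (hypothetical interface) returns a candidate output and an
    evaluator score in [0, 1]. We release once a time-uniform lower confidence
    bound on the mean score exceeds `threshold`.
    """
    total, best_output, best_score = 0.0, "", -1.0
    for t in range(1, max_rounds + 1):
        output, score = generate()
        total += score
        if score > best_score:
            best_output, best_score = output, score
        alpha_t = alpha / (t * (t + 1))                  # spend alpha across rounds
        radius = math.sqrt(math.log(2.0 / alpha_t) / (2.0 * t))
        if total / t - radius >= threshold:              # quality certified: release
            return best_output
    return best_output                                    # budget exhausted: release best seen

# Toy usage: with a constant evaluator score of 0.9 the wrapper releases once
# the anytime bound clears 0.7 (around round 180 with these constants).
print(release_when_confident(lambda: ("final draft", 0.9)))
```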

  3. TOOL · CL_27577 ·

    Neuroevolution framework boosts LLM output diversity via prompt embedding evolution

    Researchers have developed QD-LLM, a novel framework that uses parameter-efficient neuroevolution to enhance the diversity of outputs from large language models. This method evolves compact prompt embeddings, which act …
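    A minimal sketch of the quality-diversity loop this describes: evolve soft prompt embedding vectors and keep the best one per behaviour niche, MAP-Elites style. In QD-LLM the quality score and behaviour descriptor would come from the LLM's generated text; the stand-in functions below are assumptions so the loop runs on its own.

```python
import numpy as np

def evolve_prompt_embeddings(dim=16, bins=10, generations=500, sigma=0.1, seed=0):
    """MAP-Elites-style neuroevolution over soft prompt embeddings (toy stand-ins)."""
    rng = np.random.default_rng(seed)

    def quality(z):          # stand-in for the task score of the LLM's output
        return float(-np.sum(z ** 2))

    def descriptor(z):       # stand-in for a 1-D behaviour feature, mapped into [0, 1]
        return float(np.tanh(z[0]) * 0.5 + 0.5)

    archive = {}             # niche index -> (embedding, quality)
    for _ in range(generations):
        if archive and rng.random() < 0.9:         # usually mutate an existing elite
            parent, _ = archive[rng.choice(list(archive))]
            child = parent + sigma * rng.standard_normal(dim)
        else:                                       # otherwise sample a fresh embedding
            child = rng.standard_normal(dim)
        b = min(int(descriptor(child) * bins), bins - 1)
        q = quality(child)
        if b not in archive or q > archive[b][1]:  # keep the best embedding per niche
            archive[b] = (child, q)
    return archive

elites = evolve_prompt_embeddings()
print(f"{len(elites)} niches filled")
```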

  4. TOOL · CL_18865 ·

    ReCode framework enhances AI code generation by rewarding reasoning processes

    Researchers have developed ReCode, a novel reinforcement learning framework designed to improve code generation by focusing on the reasoning process. This framework uses Contrastive Reasoning-Process Reward Learning (CR…
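    The reward-learning acronym is cut off above, so the sketch below shows only the generic shape of a contrastive process-reward objective: a reward model is trained so that reasoning traces which led to correct code score higher than traces which led to failing code (Bradley-Terry-style pairwise loss). The toy embeddings and model are assumptions, not ReCode's architecture.

```python
import torch
import torch.nn as nn

class ProcessRewardModel(nn.Module):
    """Toy reward head over pooled trace embeddings (stand-in for an LLM encoder)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, trace_emb: torch.Tensor) -> torch.Tensor:
        return self.score(trace_emb).squeeze(-1)

def contrastive_process_reward_loss(model, good_traces, bad_traces):
    """Pairwise loss: traces that produced passing code should outscore failing ones."""
    r_good = model(good_traces)
    r_bad = model(bad_traces)
    return -torch.nn.functional.logsigmoid(r_good - r_bad).mean()

# One toy training step on random "trace embeddings".
model = ProcessRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
good, bad = torch.randn(8, 64), torch.randn(8, 64)
loss = contrastive_process_reward_loss(model, good, bad)
loss.backward()
opt.step()
print(float(loss))
```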

  5. RESEARCH · CL_11738 ·

    BoostLoRA method grows adapter rank to surpass full fine-tuning

    Researchers have introduced BoostLoRA, a novel parameter-efficient fine-tuning method designed to enhance model expressivity without increasing inference overhead. This technique iteratively trains and merges small adap…
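    A minimal sketch of the merge-and-regrow idea described above: train a small low-rank adapter, fold it into the frozen weight, then start a fresh adapter, so the accumulated update can exceed the per-round rank while inference still uses a single dense matrix. BoostLoRA's exact schedule and scaling are not given in the truncated summary; this is a generic toy-regression version.

```python
import torch

def merge_and_regrow(d_in=32, d_out=32, rank=2, rounds=4, steps=100):
    """Iteratively train a rank-`rank` adapter, merge it, and re-initialize it."""
    torch.manual_seed(0)
    W = torch.randn(d_out, d_in) * 0.1              # frozen base weight
    W_target = torch.randn(d_out, d_in)              # toy regression target
    X = torch.randn(256, d_in)

    for r in range(rounds):
        A = torch.zeros(d_out, rank, requires_grad=True)          # LoRA-style init
        B = (0.01 * torch.randn(rank, d_in)).requires_grad_(True)
        opt = torch.optim.Adam([A, B], lr=1e-2)
        for _ in range(steps):
            opt.zero_grad()
            pred = X @ (W + A @ B).T                 # adapter applied on top of merged weight
            loss = torch.mean((pred - X @ W_target.T) ** 2)
            loss.backward()
            opt.step()
        W = (W + A @ B).detach()                      # merge adapter, then grow a new one
        print(f"round {r}: loss={loss.item():.4f}")
    return W

merge_and_regrow()
```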

  6. RESEARCH · CL_10517 ·

    IBM's new 8B Granite 4.1 model outperforms older 32B MoE version

    IBM has released Granite 4.1, a family of open-source language models designed for enterprise use, featuring three sizes (3B, 8B, and 30B parameters). Notably, the 8B dense model demonstrates performance matching or exc…

  7. RESEARCH · CL_06927 ·

Think-Anywhere lets LLMs reason at any point during code generation

    Researchers have introduced "Think-Anywhere," a new reasoning mechanism for large language models that allows them to generate code by thinking at any point during the process, rather than just upfront. This approach ha…

  8. RESEARCH · CL_00258 ·

    LLMs advance code editing, generation, and bug detection with new techniques

    Researchers are exploring various methods to enhance Large Language Models (LLMs) for code-related tasks. One study evaluates locally deployed LLMs like LLaMA 3.2 and Mistral for Python bug detection, finding they can i…