PulseAugur
ENTITY Dora

PulseAugur coverage of Dora — every cluster mentioning Dora across labs, papers, and developer communities, ranked by signal.

Total · 30d
26
26 over 90d
Releases · 30d
0
0 over 90d
Papers · 30d
7
7 over 90d
TIER MIX · 90D
RECENT · PAGE 1/1 · 3 TOTAL
  1. RESEARCH · CL_16287 ·

    Compress Then Adapt? No, Do It Together via Task-aware Union of Subspaces

    Researchers have introduced JACTUS, a novel framework that unifies parameter-efficient fine-tuning (PEFT) and low-rank compression for adapting large pretrained models. Unlike sequential methods, JACTUS jointly optimize…
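
The summary above only names the idea, so here is a minimal sketch of the general pattern it gestures at: a frozen low-rank (SVD-truncated) compression of a pretrained weight combined with a small trainable low-rank adapter in the same layer. This is an illustration of joint compress-and-adapt, not the actual JACTUS algorithm; all shapes and ranks below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r_c, r_a = 16, 16, 4, 2   # illustrative sizes, not from the paper

W = rng.standard_normal((d_out, d_in))          # pretrained dense weight

# Compression: truncated SVD gives a frozen rank-r_c approximation of W.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r_c] * s[:r_c]                        # (d_out, r_c), frozen
B = Vt[:r_c, :]                                 # (r_c, d_in), frozen

# Adaptation: small trainable low-rank update, LoRA-style (zero at init).
L = rng.standard_normal((d_out, r_a)) * 0.01    # trainable
R = np.zeros((r_a, d_in))                       # trainable, zero-initialized

def forward(x):
    """Compressed base plus adapter: (A @ B + L @ R) @ x, without forming W."""
    return A @ (B @ x) + L @ (R @ x)

x = rng.standard_normal(d_in)
y = forward(x)

# Stored parameters: two low-rank factor pairs instead of the dense matrix.
params = A.size + B.size + L.size + R.size      # 192 here, vs 256 for dense W
```

Because R starts at zero, the layer initially behaves exactly like the compressed base; training then adjusts only the adapter factors while the compressed factors stay fixed.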

  2. RESEARCH · CL_10233 ·

    DORA system accelerates LLM reinforcement learning by 2-4x with novel asynchronous rollout

    Researchers have developed DORA, a novel asynchronous reinforcement learning system designed to accelerate language model training. DORA addresses the bottleneck caused by long-tailed trajectories in the rollout phase b…
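
The cluster describes DORA only at a high level, so the following is a generic sketch of the asynchronous-rollout pattern it targets, not DORA's implementation: actor threads stream finished trajectories into a queue, and the learner consumes fixed-size batches as they become available instead of waiting for the slowest (long-tailed) rollout in a synchronous batch. All names and workloads are invented for the example.

```python
import queue
import threading
import time

traj_queue = queue.Queue()

def actor(actor_id, lengths):
    # Each rollout's cost is proportional to its trajectory length
    # (sleep stands in for autoregressive generation).
    for n in lengths:
        time.sleep(n * 0.001)
        traj_queue.put((actor_id, n))           # finished trajectory

def learner(num_updates, batch_size):
    consumed = []
    for _ in range(num_updates):
        # Take the next batch_size trajectories as soon as they exist;
        # short rollouts are trained on without waiting for the long tail.
        batch = [traj_queue.get() for _ in range(batch_size)]
        consumed.extend(batch)                  # stand-in for a gradient step
    return consumed

# Actor 1 holds a long-tailed 40-step rollout; actor 0 produces short ones.
workloads = {0: [1, 2, 3], 1: [40]}
threads = [threading.Thread(target=actor, args=(i, w))
           for i, w in workloads.items()]
for t in threads:
    t.start()
used = learner(num_updates=2, batch_size=2)
for t in threads:
    t.join()
```

In a synchronous design the first update would block on the 40-step rollout; here the learner's first batch is typically filled by actor 0's short trajectories while the long one is still generating.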

  3. RESEARCH · CL_05419 ·

    ShadowPEFT offers new parameter-efficient fine-tuning for LLMs

    Researchers have introduced ShadowPEFT, a novel parameter-efficient fine-tuning method for large language models. Unlike existing techniques that modify individual weights, ShadowPEFT employs a centralized framework wit…