Dora
PulseAugur coverage of Dora — every cluster mentioning Dora across labs, papers, and developer communities, ranked by signal.
-
Compress Then Adapt? No, Do It Together via Task-aware Union of Subspaces
Researchers have introduced JACTUS, a novel framework that unifies parameter-efficient fine-tuning (PEFT) and low-rank compression for adapting large pretrained models. Unlike sequential methods, JACTUS jointly optimizes…
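To make the joint formulation concrete, here is a minimal sketch of the general idea: a single layer whose weight combines a shared low-rank factorization (the compression subspace) with a small task-specific low-rank update (the adaptation subspace), so both are trained together rather than compressed first and adapted afterward. The class name, ranks, and initializations below are illustrative assumptions, not JACTUS's actual design:

```python
import torch
import torch.nn as nn

class JointLowRankLinear(nn.Module):
    """Illustrative sketch (hypothetical, not the JACTUS implementation):
    a linear layer factored into a shared low-rank base (compression)
    plus a task-aware low-rank update (adaptation), optimized jointly."""

    def __init__(self, in_features, out_features, rank_base=32, rank_task=8):
        super().__init__()
        # Shared low-rank base standing in for the compressed pretrained weight.
        self.U = nn.Parameter(torch.randn(out_features, rank_base) * 0.02)
        self.V = nn.Parameter(torch.randn(rank_base, in_features) * 0.02)
        # Task-specific low-rank update; B starts at zero, as in LoRA.
        self.A = nn.Parameter(torch.randn(rank_task, in_features) * 0.02)
        self.B = nn.Parameter(torch.zeros(out_features, rank_task))

    def forward(self, x):
        # Both subspaces contribute in one forward pass, so gradients flow
        # through compression and adaptation together, not sequentially.
        return x @ (self.U @ self.V + self.B @ self.A).T

# Usage: stands in for a dense 512x512 layer with far fewer trainable parameters.
layer = JointLowRankLinear(512, 512)
y = layer(torch.randn(4, 512))  # -> shape (4, 512)
```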
-
DORA system accelerates LLM reinforcement learning by 2-4x with novel asynchronous rollout
Researchers have developed DORA, a novel asynchronous reinforcement learning system designed to accelerate language model training. DORA addresses the bottleneck caused by long-tailed trajectories in the rollout phase b…
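A minimal sketch of the asynchronous-rollout idea behind this kind of speedup: rollout workers push each trajectory to a queue as soon as it finishes, and the learner trains on whatever is ready instead of blocking on the slowest trajectory in a synchronous batch. The worker/learner structure and all parameters here are illustrative assumptions, not DORA's actual system:

```python
import queue
import random
import threading
import time

def rollout_worker(wid: int, out_q: queue.Queue, stop: threading.Event):
    """Generate trajectories with highly variable (long-tailed) completion
    times and hand each one off the moment it finishes."""
    while not stop.is_set():
        time.sleep(random.expovariate(1.0))  # long-tailed rollout duration
        out_q.put(f"trajectory from worker {wid}")

def learner(in_q: queue.Queue, batch_size: int = 4, steps: int = 3):
    """Consume whichever trajectories arrive first; stragglers never stall
    the update, they simply land in a later batch."""
    for step in range(steps):
        batch = [in_q.get() for _ in range(batch_size)]
        print(f"update {step}: trained on {len(batch)} trajectories")

stop = threading.Event()
traj_q: queue.Queue = queue.Queue()
workers = [
    threading.Thread(target=rollout_worker, args=(i, traj_q, stop), daemon=True)
    for i in range(8)
]
for w in workers:
    w.start()
learner(traj_q)
stop.set()
```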
-
ShadowPEFT offers new parameter-efficient fine-tuning for LLMs
Researchers have introduced ShadowPEFT, a novel parameter-efficient fine-tuning method for large language models. Unlike existing techniques that modify individual weights, ShadowPEFT employs a centralized framework wit…