USS Marlin
PulseAugur coverage of USS Marlin — every cluster mentioning USS Marlin across labs, papers, and developer communities, ranked by signal.
No coverage in the last 90 days.
-
PACZero enables PAC-private fine-tuning of language models with usable utility
Researchers have developed PACZero, a method for fine-tuning large language models that offers strong privacy guarantees. The approach uses sign quantization of gradients to achieve a privacy regime where mem…
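The summary above does not specify PACZero's full procedure, but the core idea it names, sign quantization of gradients, can be sketched in isolation. The sketch below is a minimal, hypothetical optimizer step (names `sign_quantize` and `sgd_sign_step` are illustrative, not from the paper): each gradient coordinate is reduced to ±1 before the update, discarding magnitude information.

```python
import numpy as np

def sign_quantize(grad: np.ndarray) -> np.ndarray:
    """Reduce a gradient to its elementwise sign (+1, -1, or 0 for exact zeros)."""
    return np.sign(grad)

def sgd_sign_step(params: np.ndarray, grad: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """One SGD-style step using only the sign of the gradient."""
    return params - lr * sign_quantize(grad)

params = np.array([0.5, -0.3, 0.0])
grad = np.array([0.2, -0.7, 0.1])
print(sgd_sign_step(params, grad))  # → [ 0.49 -0.29 -0.01]
```

Because the update reveals only one bit per coordinate rather than the exact gradient, quantization of this kind is a natural building block for privacy arguments; how PACZero turns that into a formal PAC-privacy guarantee is not covered by this snippet.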
-
Lost in State Space: Probing Frozen Mamba Representations
A new research paper investigates the internal workings of Mamba, a selective state space model that processes sequences recurrently. The study tested the hypothesis that Mamba's state could directly yield semantic sentence summaries without addi…
-
LoRA fine-tuning research suggests rank 1 is sufficient, proposes data-aware initialization
Three new research papers explore methods for optimizing LoRA fine-tuning of large language models. One paper proposes reducing the LoRA rank to 1 for binary classification tasks, showing competitive performance…
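To make the rank-1 claim concrete: a standard LoRA adapter adds a low-rank update B·A to a frozen weight W, and at rank 1 the factors collapse to a single column and a single row. A minimal sketch (plain NumPy, not any specific paper's code; the zero-init of B follows common LoRA practice):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 4

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = 0.01 * rng.standard_normal((1, d_in))   # trainable rank-1 "down" projection
B = np.zeros((d_out, 1))                    # trainable rank-1 "up" projection, init 0

def lora_forward(x: np.ndarray, alpha: float = 1.0, r: int = 1) -> np.ndarray:
    delta_W = (alpha / r) * (B @ A)         # B @ A has rank at most 1
    return x @ (W + delta_W).T

x = rng.standard_normal((2, d_in))
# With B initialized to zero, the adapter is a no-op before training:
print(np.allclose(lora_forward(x), x @ W.T))  # → True
```

At rank 1 the adapter trains only d_in + d_out parameters per layer, which is why showing it suffices for binary classification would be a meaningful efficiency result.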
-
New theory reveals inherent geometric blind spot in supervised learning
Researchers have identified a fundamental geometric limitation in supervised learning, termed the "geometric blind spot." This theoretical finding demonstrates that standard supervised learning objectives inherently ret…