PulseAugur

Llama-3.2-3B

PulseAugur coverage of Llama-3.2-3B — every cluster mentioning Llama-3.2-3B across labs, papers, and developer communities, ranked by signal.

Total · 30d: 6 (6 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 5 (5 over 90d)
TIER MIX · 90D
RECENT · PAGE 1/1 · 6 TOTAL
  1. TOOL · CL_23818 ·

    Developer fine-tunes Qwen 3B model to replicate personal writing style

    A developer has created a custom AI system to mimic their personal writing style, overcoming the limitations of prompt engineering. The system uses a two-model architecture: a frontier LLM like Claude Opus or Llama 70B …

  2. TOOL · CL_20380 ·

    Distributed output templates, not single positions, drive LLM in-context learning

    Researchers have demonstrated that in-context learning in large language models is driven by distributed output templates rather than single-position activations. Through multi-position intervention, they achieved up to…

  3. RESEARCH · CL_20498 ·

    LLMs show significant bias in conflict monitoring, not ready for deployment

    A new paper evaluates several large language models for their suitability in conflict monitoring tasks in West Africa. The study found that open-weight models like Gemma 3 4B and Llama 3.2 3B exhibit significant biases,…

  4. RESEARCH · CL_15892 ·

    New method debiases LLMs at decoding time, improving fairness without model retraining

    Researchers have developed a novel method to mitigate biases in large language models during the decoding phase, without altering the model's weights. This approach uses a separate Process Reward Model (PRM) to score to…
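The summary above is truncated, so the paper's exact PRM formulation isn't shown here. As a generic illustration of the decoding-time idea it describes, the sketch below reranks candidate next tokens by blending the base model's log-probability with an external reward score; the `reward_fn`, the `alpha` weight, and the toy candidates are all assumptions for illustration, not the paper's method.

```python
import math

def rerank_decode_step(candidates, reward_fn, alpha=0.5):
    """Blend base-model log-probabilities with an external reward score
    at one decoding step, without touching the model's weights.

    candidates: list of (token, logprob) pairs from the base model.
    reward_fn:  maps a token to a score in (0, 1] -- a stand-in for a
                Process Reward Model / bias scorer (hypothetical).
    alpha:      weight on the reward term (0 = base model only).
    """
    scored = [
        (tok, (1 - alpha) * lp + alpha * math.log(max(reward_fn(tok), 1e-9)))
        for tok, lp in candidates
    ]
    # Greedy pick of the highest combined score.
    return max(scored, key=lambda s: s[1])[0]

# Toy example: the "reward model" penalises a stereotyped continuation.
candidates = [("nurse", math.log(0.6)), ("doctor", math.log(0.4))]
reward = {"nurse": 0.2, "doctor": 0.9}
print(rerank_decode_step(candidates, reward.get, alpha=0.7))  # → doctor
```

With `alpha=0.7` the reward term outweighs the base model's preference, flipping the greedy choice; tuning `alpha` trades off fluency against the fairness objective.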

  5. TOOL · CL_15980 ·

    Llama-3.2-3B model achieves 92% accuracy in parsing blood donation requests

    Researchers have developed the Cognitive Blood Request System (CBRS), a framework designed to efficiently filter and parse urgent blood donation requests from social media streams. This system utilizes a novel bilingual…

  6. RESEARCH · CL_06702 ·

    Researchers propose efficient LLM classification probes to reduce latency and VRAM

    Researchers have developed a method to integrate classification tasks, such as safety checks, directly into the forward pass of large language models (LLMs). This approach uses lightweight probes trained on the LLM's in…
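The general pattern this item describes, training a small probe on internal representations so a classification (e.g. a safety check) rides along with the forward pass instead of requiring a second model call, can be sketched minimally. The logistic-regression probe, the 2-D toy "hidden states", and all hyperparameters below are assumptions for illustration; the paper's actual probe architecture isn't shown in the truncated summary.

```python
import math, random

def train_probe(features, labels, dim, epochs=200, lr=0.5):
    """Train a tiny logistic-regression probe on fixed feature vectors
    (stand-ins for hidden states taken from an LLM's forward pass)."""
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of binary cross-entropy w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def probe_predict(w, b, x):
    """Cheap linear read-out: one dot product per classification."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy "hidden states": unsafe inputs cluster in a separable direction.
random.seed(0)
safe   = [[random.gauss(-1, 0.3), random.gauss(-1, 0.3)] for _ in range(20)]
unsafe = [[random.gauss(+1, 0.3), random.gauss(+1, 0.3)] for _ in range(20)]
X, y = safe + unsafe, [0] * 20 + [1] * 20
w, b = train_probe(X, y, dim=2)
acc = sum(probe_predict(w, b, x) == t for x, t in zip(X, y)) / len(X)
print(f"probe accuracy: {acc:.2f}")
```

Because the probe is just a linear read-out over activations the model already computes, the marginal latency and VRAM cost is one dot product per check, which is the efficiency argument the headline makes.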