PulseAugur

DeepSeek V3.2

PulseAugur coverage of DeepSeek V3.2 — every cluster mentioning DeepSeek V3.2 across labs, papers, and developer communities, ranked by signal.

Total · 30d: 3 (3 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 3 (3 over 90d)
TIER MIX · 90D
RECENT · PAGE 1/1 · 5 TOTAL
  1. TOOL · CL_22192 ·

    Zyphra's ZAYA1-8B model matches larger rivals with 700M active parameters

    Zyphra has released ZAYA1-8B, a reasoning-focused mixture-of-experts model with 700 million active parameters. The model was trained from scratch on an AMD compute platform and uses a novel four-stage reinforcement …

  2. TOOL · CL_18561 ·

    LLMs show genre bias, misclassifying entertainment news as fake

    A new research paper investigates whether large language models exhibit skepticism towards entertainment news, finding that some frontier models are more prone to misclassifying legitimate entertainment articles as fake…

  3. RESEARCH · CL_06722 ·

    Frontier LLMs like GPT-5.4 and Claude Opus 4.7 show significant verbal tics

    A new paper analyzes the prevalence of verbal tics, such as repetitive phrases and sycophantic openers, in eight leading large language models. Researchers developed a Verbal Tic Index (VTI) to quantify these tics, find…

  4. FRONTIER RELEASE · CL_02784 ·

    DeepSeek V4 models offer high performance with reduced inference costs and NPU support

    DeepSeek has released its V4 family of open-weight large language models, featuring a 1.6-trillion-parameter flagship and a smaller 284-billion-parameter Flash MoE model. DeepSeek claims these new models rival top proprietary LLM…

  5. FRONTIER RELEASE · CL_00200 ·

    Google, DeepSeek, and arXiv papers explore agent learning and memory

    DeepSeek has released two new open-weight models, V4-Pro and V4-Flash, featuring a 1-million-token context window and a Mixture-of-Experts architecture. These models are significantly larger than previous DeepSeek release…