PulseAugur
research · [1 source]

Alibaba Qwen's QwQ-32B model claims performance parity with DeepSeek R1-671B

A new open-source model named QwQ-32B has been released, claiming performance parity with the significantly larger 671-billion-parameter DeepSeek R1. If the claim holds up, it represents a substantial efficiency gain: comparable capability at roughly one-twentieth of the parameter count. The release underscores ongoing progress in making powerful reasoning models more accessible and computationally feasible.

Summary written by gemini-2.5-flash-lite from 1 source.

RANK_REASON Release of a new open-source model claiming to match a larger, established model's performance.

Read on Smol AINews →

COVERAGE [1]

  1. Smol AINews TIER_1

    QwQ-32B claims to match DeepSeek R1-671B

    **Alibaba Qwen** released their **QwQ-32B** model, a **32 billion parameter** reasoning model using a novel two-stage reinforcement learning approach: first scaling RL for math and coding tasks with accuracy verifiers and code execution servers, then applying RL for general capab…
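The first RL stage described above hinges on verifiable, outcome-based rewards: an accuracy verifier for math answers and a code execution server for programming tasks. A minimal Python sketch of what such reward functions might look like follows; the function names, signatures, and scoring scheme are illustrative assumptions, not Qwen's actual implementation.

```python
def math_accuracy_reward(model_answer: str, ground_truth: str) -> float:
    """Hypothetical accuracy verifier: 1.0 if the model's final answer
    matches the known ground truth exactly (after trimming), else 0.0."""
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0


def code_execution_reward(source: str,
                          tests: list[tuple[tuple, object]],
                          fn_name: str = "solution") -> float:
    """Hypothetical code reward: execute model-generated source and score
    it by the fraction of (args, expected) test cases it passes."""
    namespace: dict = {}
    try:
        exec(source, namespace)      # run the generated code to define fn_name
        fn = namespace[fn_name]
    except Exception:
        return 0.0                   # code that fails to define the function earns nothing
    passed = 0
    for args, expected in tests:
        try:
            if fn(*args) == expected:
                passed += 1
        except Exception:
            pass                     # runtime errors count as failed tests
    return passed / len(tests)
```

In a real training setup, executing untrusted model output would of course happen inside a sandboxed execution server rather than a bare `exec`, and partial-credit scoring is one of several possible reward shapes.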