PulseAugur

Smol AI dissects Abacus AI's 72B-parameter Smaug model

Abacus AI has released Smaug-72B, a new large language model built as a large finetune of Qwen 1.0 and covered in Smol AI's AINews newsletter. The model currently tops the Hugging Face Open LLM Leaderboard, though Nous Research has voiced skepticism about its benchmark results.

Summary written by gemini-2.5-flash-lite from 1 source.

RANK_REASON Release of a 72B-parameter model with benchmark results, not from a tier-1 lab.

Read on Smol AINews →

COVERAGE [1]

  1. Smol AINews TIER_1

    The Dissection of Smaug (72B)

    **Abacus AI** launched **Smaug 72B**, a large finetune of **Qwen 1.0**, which remains unchallenged on the **Hugging Face Open LLM Leaderboard** despite skepticism from **Nous Research**. **LAION** introduced a local voice assistant model named **Bud-E** with a notable demo. The *…