PulseAugur

AI2 releases OLMo 2, a new state-of-the-art fully open LLM

Allen Institute for AI (AI2) has released OLMo 2, a new open-source large language model that achieves state-of-the-art performance among fully open models. The model is available for research and commercial use, with weights and code publicly accessible. OLMo 2 was trained on a massive dataset and demonstrates strong capabilities across various benchmarks.

Summary written by gemini-2.5-flash-lite from 1 source.

RANK_REASON Release of a new open-source LLM from a research institution, not a frontier lab.

Read on Smol AINews →

COVERAGE [1]

  1. Smol AINews TIER_1

    OLMo 2 - new SOTA Fully Open LLM

    **AI2** has updated **OLMo-2** to roughly **Llama 3.1 8B**-equivalent performance, training on **5T tokens** and using learning-rate annealing and new high-quality data (Dolmino). They credit **Tülu 3** and its "Reinforcement Learning with Verifiable Rewards" approach. On Reddit, **Qwen2.5…
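
    The learning-rate annealing mentioned above can be sketched roughly as follows. This is a minimal illustration, not AI2's actual schedule: the function name, step counts, and peak LR are all hypothetical, and the shape assumed is a simple constant-then-linear-decay-to-zero schedule.

    ```python
    # Hypothetical sketch of late-pretraining learning-rate annealing:
    # hold the LR constant, then linearly decay it to zero over the
    # final steps. All parameters here are illustrative assumptions.
    def annealed_lr(step: int, total_steps: int, peak_lr: float, anneal_start: int) -> float:
        """Constant LR until anneal_start, then linear decay to 0 at total_steps."""
        if step < anneal_start:
            return peak_lr
        anneal_len = total_steps - anneal_start
        remaining = total_steps - step
        return peak_lr * max(remaining / anneal_len, 0.0)

    # e.g. peak LR 3e-4, annealing over the last 10% of 1000 steps
    print(annealed_lr(0, 1000, 3e-4, 900))     # 0.0003
    print(annealed_lr(950, 1000, 3e-4, 900))   # 0.00015
    print(annealed_lr(1000, 1000, 3e-4, 900))  # 0.0
    ```

    Annealing like this is often paired with an upweighted high-quality data mix (as with Dolmino above) during the decay phase.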