PulseAugur

Fireworks AI offers GLM 5.1 with 200K context for agentic coding

Fireworks AI now offers GLM 5.1 on its Training Platform, accessible through both the Managed and Training API workflows. Users can run SFT and DPO fine-tunes with smart defaults or a custom loss function. GLM 5.1's 200K context window makes it suitable for long-horizon agentic coding tasks.

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Enables fine-tuning of long-context models for agentic coding tasks.

RANK_REASON Release of a model with specific capabilities (200K context window) for fine-tuning, offered via a platform.


COVERAGE [1]

  1. X — Fireworks (inference infra) TIER_1 · FireworksAI_HQ

    GLM 5.1 from @Zai_org is now available on @FireworksAI_HQ Training Platform across the Managed and Training API workflows. Try SFT and DPO with smart defaults or your own custom loss function with a 200K context window, perfect for long-horizon agentic coding fine-tunes.
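The tweet mentions DPO with smart defaults or a custom loss function. As a rough illustration of what such a preference loss computes, here is a minimal sketch of the standard DPO objective for a single preference pair; the function name and the numeric inputs are illustrative, not Fireworks API calls.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair.

    Inputs are the summed token log-probabilities of the chosen and
    rejected completions under the policy being fine-tuned and a frozen
    reference model. beta scales the implicit reward.
    """
    # Implicit rewards: how much more the policy favors each completion
    # than the reference model does.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log sigmoid(margin): near zero once the policy prefers the
    # chosen completion by a wide margin, large when it prefers the
    # rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Example with made-up log-probs: widening the gap between chosen and
# rejected completions lowers the loss.
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))  # ≈ 0.598
print(dpo_loss(-10.0, -14.0, -11.0, -11.0))  # ≈ 0.513
```

A custom loss on a fine-tuning platform would typically replace or reweight this per-pair term; this sketch only shows the baseline objective it would be compared against.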