
Fireworks AI adds Google's Gemma 4 models to its training platform

Fireworks AI has announced the integration of Google DeepMind's Gemma 4 models, in their 26B and 31B parameter versions, into its training platform. Users can fine-tune these models through the Fireworks Managed and Training API workflows. The platform supports both Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO), with customizable loss functions and a 256K context window; Reinforcement Learning (RL) support is expected soon.
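
The "custom loss function" hook presumably lets users swap in preference-tuning objectives such as the standard DPO loss. As a rough illustration of what such an objective computes, here is a minimal PyTorch sketch of the textbook DPO loss (Rafailov et al., 2023); the function name and signature are illustrative, not Fireworks' actual API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective (illustrative, not the Fireworks API).

    Each argument is a tensor of per-sequence log-probabilities (summed
    over tokens) for the chosen/rejected completion, under the policy
    being fine-tuned and under the frozen reference model.
    """
    # Log-ratio of policy to reference for each completion.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # The policy is rewarded for preferring the chosen completion over
    # the rejected one; beta scales the implicit KL penalty, so a higher
    # beta keeps the policy closer to the reference model.
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()

# Toy usage with hypothetical per-sequence log-probabilities.
loss = dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.9]),
                torch.tensor([-13.0]), torch.tensor([-14.8]))
```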

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Expands accessibility of Google's Gemma models for fine-tuning and research on a specialized platform.

RANK_REASON Release of a new model version (Gemma 4) from a major AI lab (Google DeepMind) made available on a third-party platform.

COVERAGE [1]

  1. X — Fireworks (inference infra) TIER_1 · FireworksAI_HQ

    Gemma 4 (26B + 31B) from @GoogleDeepMind is now available on the Fireworks Training Platform across the Managed and Training API workflows. Try SFT and DPO with smart defaults or your own custom loss function with a 256K context window. RL support landing soon! What would you …