Fireworks AI has announced the integration of Google DeepMind's Gemma 4 models, specifically the 26B and 31B parameter versions, into its training platform. Users can fine-tune these models through the Fireworks managed training API workflows. The platform supports both Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) with customizable loss functions and a 256K context window; Reinforcement Learning (RL) support is expected soon.
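Fireworks does not document its DPO loss internals in this summary, but the DPO objective itself is standard. A minimal sketch of the per-pair loss in plain Python (function and parameter names here are illustrative, not Fireworks API):

```python
import math

def dpo_loss(pol_chosen, pol_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the policy being tuned and a frozen reference model.
    beta scales how strongly the policy is pushed away from the reference.
    """
    margin = (pol_chosen - ref_chosen) - (pol_rejected - ref_rejected)
    # -log(sigmoid(beta * margin)); minimized when the tuned policy
    # prefers the chosen response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When the tuned policy already favors the chosen response, loss is low.
print(round(dpo_loss(-10.0, -20.0, -15.0, -15.0), 4))  # → 0.3133
```

A "customizable loss function" in this setting typically means exposing knobs like `beta` or mixing in an auxiliary SFT term, rather than replacing the objective wholesale.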
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Expands accessibility of Google's Gemma models for fine-tuning and research on a specialized platform.
RANK_REASON Release of a new model version (Gemma 4) from a major AI lab (Google DeepMind) made available on a third-party platform.