Mistral AI has released Mixtral 8x7B, a sparse mixture-of-experts (SMoE) large language model. It outperforms Llama 2 70B on many benchmarks while using significantly less compute during inference, since only a subset of its experts is active for each token. The model is available under the Apache 2.0 license, allowing commercial use.
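For intuition on why a sparse MoE is cheaper at inference time, here is a minimal top-2 routing sketch in PyTorch. This is illustrative only, not Mixtral's actual implementation; the dimensions and expert shapes are assumptions. Mixtral routes each token to 2 of 8 experts, so per-token compute scales with the two active experts rather than the full parameter count.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Minimal top-2 mixture-of-experts sketch (illustrative, not Mixtral's code)."""

    def __init__(self, dim=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, n_experts)  # router: scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                                # x: (tokens, dim)
        scores = self.gate(x)                            # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep only the top-2 experts
        weights = F.softmax(weights, dim=-1)             # normalize over the selected 2
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                    # tokens routed to expert e
                if mask.any():                           # only active experts run
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

moe = SparseMoE()
tokens = torch.randn(5, 64)
print(moe(tokens).shape)  # torch.Size([5, 64])
```

Because the router selects 2 of 8 experts per token, roughly a quarter of the expert parameters participate in any forward pass, which is the mechanism behind the dense-model-beating quality at lower inference cost.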