PulseAugur
research

Meta's unreleased Chameleon model rivals GPT-4o in omnimodal capabilities

Meta AI has reportedly developed an unreleased omnimodal model named Chameleon, designed to handle text, images, and audio as both inputs and outputs. The model is said to be comparable to OpenAI's GPT-4o in its capabilities, marking a significant advance in multimodal AI research. While details are scarce, Chameleon's existence signals Meta's ongoing push to compete at the forefront of AI development.

Summary written by gemini-2.5-flash-lite from 1 source.

RANK_REASON Report on an unreleased model from a major lab, akin to a research paper or internal project reveal.

Read on Smol AINews →

COVERAGE [1]

  1. Smol AINews TIER_1 (ET)

    Chameleon: Meta's (unreleased) GPT4o-like Omnimodal Model

    **Meta AI FAIR** introduced **Chameleon**, a new multimodal model family with **7B** and **34B** parameter versions trained on **10T tokens** of interleaved text and image data, enabling "early fusion" multimodality in which the model can natively output any modality. While reasoning benchmarks…
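The "early fusion" approach mentioned above generally means quantizing images into discrete tokens and interleaving them with text tokens in a single sequence, so one autoregressive transformer can consume and emit either modality. The sketch below illustrates that idea only; the vocabulary sizes, function names, and token values are illustrative assumptions, not details from the Chameleon report.

```python
# Hypothetical sketch of early-fusion token interleaving.
# Images are assumed to be pre-quantized into discrete codes
# (e.g. by a VQ tokenizer); those codes are offset into a shared
# vocabulary after the text token ids, producing one flat sequence.

TEXT_VOCAB = 65_536   # assumed size of the text (BPE) vocabulary
IMAGE_VOCAB = 8_192   # assumed size of the discrete image codebook

def image_token(code: int) -> int:
    """Map a VQ image code into the shared vocabulary, after text ids."""
    assert 0 <= code < IMAGE_VOCAB
    return TEXT_VOCAB + code

def interleave(segments):
    """Flatten (modality, token_ids) segments into one training sequence."""
    seq = []
    for modality, ids in segments:
        if modality == "text":
            seq.extend(ids)
        elif modality == "image":
            seq.extend(image_token(c) for c in ids)
        else:
            raise ValueError(f"unknown modality: {modality}")
    return seq

# A caption followed by its image, as one sequence for a single model:
sample = interleave([
    ("text", [101, 202, 303]),   # e.g. token ids for "a photo of a cat"
    ("image", [7, 4090, 15]),    # VQ codes for the image patches
])
print(sample)  # [101, 202, 303, 65543, 69626, 65551]
```

Because both modalities live in one vocabulary, the same next-token head can generate text or image codes, which is what lets such a model "natively output any modality."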