PulseAugur
research · [1 source]

Subquadratic launches 12M-token LLM, claims major architectural shift

Subquadratic, a Miami-based startup, has emerged from stealth claiming to have built the first Large Language Model (LLM) that does not rely on quadratic attention. This architectural shift reportedly lets the model process a 12-million-token context window at significantly lower cost than existing frontier models.

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Potential to drastically lower inference costs for LLMs with extremely long context windows.
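For a sense of scale, a rough back-of-envelope sketch of why quadratic attention becomes prohibitive at this length (the hidden dimension and the linear-cost alternative below are illustrative assumptions, not figures from the announcement):

    # Illustrative arithmetic only; dimensions are assumed, not Subquadratic's.
    n = 12_000_000   # claimed context length in tokens
    d = 4096         # assumed hidden dimension

    # Standard attention materialises an n x n score matrix per head and layer.
    quadratic_entries = n * n                           # ~1.4e14 entries
    quadratic_tb_fp16 = quadratic_entries * 2 / 1e12    # ~288 TB at 2 bytes/entry

    # A linear-cost token-mixing scheme (e.g. recurrent or state-space style)
    # scales roughly with n * d instead.
    linear_ops = n * d                                  # ~4.9e10

    print(f"quadratic score matrix: {quadratic_entries:.1e} entries (~{quadratic_tb_fp16:.0f} TB fp16)")
    print(f"linear-cost mixing:     {linear_ops:.1e} ops")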

RANK_REASON Startup announces a novel LLM architecture with a large context window and reduced cost.


COVERAGE [1]

  1. Mastodon — fosstodon.org · TIER_1 · [email protected]

    A Miami-based startup called Subquadratic came out of stealth last week with a single claim that’s either the most important architectural shift since the 2017 transformer paper or the most sophisticated AI hype in recent memory. They say they’ve built the first LLM that doesn’t …