PulseAugur
commentary · [1 source]

LLMs are stateless, relying solely on context for knowledge

Large language models are stateless: all of their knowledge is derived from the context provided to them. This contrasts with humans, who carry a lifetime's worth of context, likely many terabytes. The post speculates that if LLMs could be fed terabytes of experience, the cost of a human would look ridiculously low compared to that of an LLM, with potentially dire consequences.
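The statelessness the post describes can be illustrated with a toy sketch. The `toy_model` function below is a hypothetical stand-in, not a real LLM API; the point is that its output depends only on the context string it receives, so any "memory" must be re-sent by the caller on every call:

```python
def toy_model(context: str) -> str:
    """Stateless stand-in for an LLM: output is a pure function of the
    context argument. Nothing is remembered between calls."""
    if "my name is Ada" in context:
        return "Your name is Ada."
    return "I don't know your name."

# Without re-sending history, the model "forgets" earlier turns:
print(toy_model("What is my name?"))  # → I don't know your name.

# The caller holds the state and replays it on each turn:
history = ["my name is Ada", "What is my name?"]
print(toy_model("\n".join(history)))  # → Your name is Ada.
```

Real chat APIs work the same way in principle: each request carries the full conversation so far, which is why context size bounds what the model can "know."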

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Speculates on the potential economic and existential implications if LLMs could process human-scale context.

RANK_REASON The cluster contains an opinion piece discussing the stateless nature of LLMs and comparing them to human context.

Read on Mastodon — sigmoid.social →

COVERAGE [1]

  1. Mastodon — sigmoid.social TIER_1 · [email protected]

    Today I learned that LLMs are stateless. All knowledge is conveyed through context. We humans carry with us a lifetime’s worth of context—probably many terabytes. If we could feed terabytes of experience into LLMs, the cost of a human would be ridiculously low compared to an LLM.…