Telegraph English compresses prompts with structured symbols, outperforming LLMLingua-2

Researchers have developed a new prompt-compression protocol called Telegraph English (TE), which rewrites natural language into a structured dialect built from logical symbols. Unlike methods that delete tokens, TE decomposes the input into atomic facts and substitutes phrases with symbols, adapting the compression rate to the information density of the text. Evaluations on LongBench-v2 with OpenAI models showed TE preserving 99.1% accuracy at a 50% token reduction and outperforming existing methods, particularly on smaller models.
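To make the idea concrete, here is a minimal sketch of phrase-to-symbol rewriting in the spirit of TE. The substitution table below is hypothetical: the paper's actual grammar, symbol set, and fact-decomposition step are not reproduced here.

```python
# Hypothetical phrase-to-symbol rules, illustrating symbolic rewriting
# in the spirit of Telegraph English (not the paper's actual rule set).
SYMBOL_RULES = {
    "for all": "\u2200",     # ∀
    "there exists": "\u2203",  # ∃
    "therefore": "\u2234",   # ∴
    "because": "\u2235",     # ∵
    " and ": " \u2227 ",     # ∧
    " or ": " \u2228 ",      # ∨
}

def compress(text: str) -> str:
    """Rewrite common connective phrases into denser logical symbols."""
    out = text
    for phrase, symbol in SYMBOL_RULES.items():
        out = out.replace(phrase, symbol)
    return out

original = ("for all users, the cache is warm and the index is fresh, "
            "therefore latency is low")
compressed = compress(original)
print(compressed)
print(f"chars: {len(original)} -> {len(compressed)}")
```

A real implementation would first split the input into atomic facts and vary how aggressively it substitutes based on local information density; this sketch only shows the surface-level symbolic substitution that gives the dialect its character.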

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT This method could significantly reduce token usage for LLM inputs, potentially lowering costs and improving efficiency, especially for smaller models.

RANK_REASON The cluster contains a new academic paper detailing a novel method for prompt compression.


COVERAGE [2]

  1. arXiv cs.CL TIER_1 · Mikhail L. Arbuzov, Sisong Bei, Ziwei Dong, Dmitri Kalaev, Alexey A. Shvets ·

    Telegraph English: Semantic Prompt Compression via Structured Symbolic Rewriting

    arXiv:2605.04426v1 · Abstract: We introduce Telegraph English (TE), a prompt-compression protocol that rewrites natural language into a symbol-rich, formally-structured dialect. Where token-deletion methods such as LLMLingua-2 train a classifier to delete low-imp…

  2. arXiv cs.CL TIER_1 · Alexey A. Shvets ·

    Telegraph English: Semantic Prompt Compression via Structured Symbolic Rewriting

    We introduce Telegraph English (TE), a prompt-compression protocol that rewrites natural language into a symbol-rich, formally-structured dialect. Where token-deletion methods such as LLMLingua-2 train a classifier to delete low-importance tokens at a fixed ratio, TE performs a f…