PulseAugur
research · [3 sources]

CoQuant paper introduces joint weight-activation subspace projection for efficient mixed-precision LLM quantization

Researchers have introduced CoQuant, a method for mixed-precision quantization of Large Language Models (LLMs). It addresses a limitation of existing approaches by considering weight and activation statistics jointly when identifying the critical subspaces to preserve in high precision. CoQuant models the quantization error theoretically and derives a weighted PCA solution that balances the weight and activation covariances, with the goal of reducing inference cost more effectively. In experiments on Llama-3.2 and Qwen2.5 models, CoQuant outperforms current post-training quantization baselines in perplexity and reasoning accuracy.

Summary written by gemini-2.5-flash-lite from 3 sources.

IMPACT Improves LLM efficiency by reducing inference costs through optimized mixed-precision quantization.

RANK_REASON The cluster contains an academic paper detailing a new method for LLM quantization.

Read on arXiv cs.LG →
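To make the approach concrete, below is a minimal sketch in Python of the general idea, not the authors' implementation: blend weight and activation second-moment statistics with a fixed mixing weight alpha (an assumption here; per the summary, CoQuant derives its balance from a theoretical error model), take the top-k eigenvectors of the blend as the critical subspace via a weighted PCA, keep the projection onto that subspace in high precision, and quantize only the residual to low bits. All function names, alpha, k, and bit widths are illustrative.

import numpy as np

def weighted_pca_subspace(W, X, alpha=0.5, k=8):
    """Orthonormal basis (d x k) spanning the top-k directions of a
    weighted blend of weight and activation second-moment statistics."""
    C_w = W.T @ W / W.shape[0]                # weight statistics, d x d
    C_x = X.T @ X / X.shape[0]                # activation statistics, d x d
    C = alpha * C_w + (1.0 - alpha) * C_x     # hypothetical fixed weighting
    eigvals, eigvecs = np.linalg.eigh(C)      # eigh: symmetric input, ascending eigenvalues
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]

def uniform_quantize(M, bits=3):
    """Toy symmetric per-tensor uniform quantizer (stand-in for a real PTQ kernel)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(M).max() / qmax + 1e-12
    return np.clip(np.round(M / scale), -qmax - 1, qmax) * scale

def mixed_precision_weights(W, X, alpha=0.5, k=8, bits=3):
    U = weighted_pca_subspace(W, X, alpha, k)
    P = U @ U.T                               # projector onto the critical subspace
    W_hi = W @ P                              # component kept in high precision
    W_lo = uniform_quantize(W - W_hi, bits)   # low-bit residual
    return W_hi + W_lo

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))               # toy weight matrix (out x in)
X = rng.normal(size=(1024, 128))              # toy calibration activations
W_q = mixed_precision_weights(W, X)
print("relative error:", np.linalg.norm(W - W_q) / np.linalg.norm(W))

On this toy example, shifting alpha toward the activation statistics or raising k changes which directions escape low-bit quantization; the paper's contribution, per the abstract and summary, is choosing that balance from a modeled quantization error rather than by hand.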

COVERAGE [3]

  1. arXiv cs.LG TIER_1 · Zhe Ding, Su Pan, Duowei Pan

    CoQuant: Joint Weight-Activation Subspace Projection for Mixed-Precision LLMs

    arXiv:2604.26378v1 Announce Type: new Abstract: Post-training quantization (PTQ) has become an important technique for reducing the inference cost of Large Language Models (LLMs). While recent mixed-precision methods improve ultra-low bit quantization by preserving critical subsp…

  2. arXiv cs.LG TIER_1 · Duowei Pan

    CoQuant: Joint Weight-Activation Subspace Projection for Mixed-Precision LLMs

    Post-training quantization (PTQ) has become an important technique for reducing the inference cost of Large Language Models (LLMs). While recent mixed-precision methods improve ultra-low bit quantization by preserving critical subspaces in high precision, they typically construct…

  3. Hugging Face Daily Papers TIER_1

    CoQuant: Joint Weight-Activation Subspace Projection for Mixed-Precision LLMs

    Post-training quantization (PTQ) has become an important technique for reducing the inference cost of Large Language Models (LLMs). While recent mixed-precision methods improve ultra-low bit quantization by preserving critical subspaces in high precision, they typically construct…