GQA
PulseAugur coverage of GQA — every cluster mentioning GQA across labs, papers, and developer communities, ranked by signal.
1 day with sentiment data
-
Transformer LLM Architectures Converge on Standard Stack
A recent analysis of 53 large language models from 2017 to 2025 reveals a significant convergence in transformer architectures. Key elements of this de facto standard include pre-normalization (RMSNorm), Rotary Position…
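To make one of the converged components concrete, here is a minimal RMSNorm sketch in PyTorch. This is an illustrative implementation of the general technique, not any specific model's code; the epsilon value and dtype handling vary between labs.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root-mean-square layer norm: rescales each token by the RMS of its
    features with a learned gain, but unlike LayerNorm performs no mean
    subtraction and has no bias term."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Compute the reciprocal RMS over the feature dim in float32 for stability.
        rms = x.float().pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return (x.float() * rms).type_as(x) * self.weight
```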
-
IBM releases Granite 4.1 LLMs with 512K context and Apache 2.0 license
IBM has released the Granite 4.1 family of large language models, comprising 3B, 8B, and 30B parameter versions. These models were trained on approximately 15 trillion tokens through a five-stage pre-training process th…
-
BLASST paper introduces dynamic sparse attention for faster LLM inference
Researchers have developed BLASST, a novel sparse attention mechanism designed to accelerate inference for large language models with long contexts. This drop-in solution dynamically skips attention blocks using a simpl…
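The summary above is truncated, so BLASST's actual skipping criterion is not spelled out here. The sketch below shows the generic pattern it describes, dynamically skipping key/value blocks that a cheap score deems unimportant; the scoring rule (mean-query vs. mean-key dot product against a threshold) is an assumption for illustration, not BLASST's published method.

```python
import torch

def dynamic_block_sparse_attention(q, k, v, block_size=128, threshold=0.0):
    """Illustrative dynamic block skipping: cheaply score each key block,
    then attend only to blocks whose score clears a threshold.
    Shapes: q, k, v are (seq_len, head_dim); single head, no causal mask."""
    scale = q.shape[-1] ** -0.5
    q_mean = q.mean(dim=0)  # cheap proxy for the whole query set
    kept = []
    for start in range(0, k.shape[0], block_size):
        k_blk = k[start:start + block_size]
        # Proxy score: scaled dot product of mean query with the block's mean key.
        if (q_mean @ k_blk.mean(dim=0)) * scale >= threshold:
            kept.append((start, start + block_size))
    if not kept:  # fallback: always keep the most recent block
        kept = [(max(0, k.shape[0] - block_size), k.shape[0])]
    k_sel = torch.cat([k[s:e] for s, e in kept])
    v_sel = torch.cat([v[s:e] for s, e in kept])
    attn = torch.softmax((q @ k_sel.T) * scale, dim=-1)
    return attn @ v_sel
```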
-
Kwai Summary Attention compresses historical contexts for efficient long-context LLMs
Researchers have introduced Kwai Summary Attention (KSA), a novel attention mechanism designed to address the quadratic time complexity of standard softmax attention in large language models. KSA aims to maintain a line…
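KSA's exact construction is likewise cut off above; the sketch below shows the generic shape of the idea, compressing distant history into a small set of summary vectors while keeping recent tokens exact, so the attended context stays roughly constant-size as the sequence grows. The mean-pooled compression and the `window`/`n_summary` parameters are illustrative assumptions, not KSA's actual design.

```python
import torch

def compressed_context_attention(q, k, v, window=256, n_summary=16):
    """Generic history compression: keep the most recent `window` keys/values
    exact and mean-pool older ones into roughly `n_summary` summary slots.
    Shapes: q is (1, d) for the current token; k, v are (seq_len, d)."""
    scale = q.shape[-1] ** -0.5
    if k.shape[0] > window:
        old_k, old_v = k[:-window], v[:-window]
        chunk = old_k.shape[0] // n_summary or 1
        # One summary vector per chunk of old history.
        sum_k = torch.stack([c.mean(0) for c in old_k.split(chunk)])
        sum_v = torch.stack([c.mean(0) for c in old_v.split(chunk)])
        k = torch.cat([sum_k, k[-window:]])
        v = torch.cat([sum_v, v[-window:]])
    attn = torch.softmax((q @ k.T) * scale, dim=-1)
    return attn @ v
```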
-
DeepSeek benchmarks MLA vs GQA on A100, revealing bandwidth-quality tradeoff
A technical analysis explores DeepSeek's decision to utilize MLA (Multi-Head Latent Attention) over GQA (Grouped-Query Attention) in their models. The author highlights this choice as a strategic trade-off between compu…
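The bandwidth side of that trade-off comes down to KV-cache size per decoded token, which is easy to make concrete. The sketch below computes per-token KV-cache bytes for full multi-head attention versus GQA; the dimensions are illustrative (loosely 70B-class: 64 query heads, head dim 128, 80 layers, fp16), not DeepSeek's actual configuration.

```python
def kv_cache_bytes_per_token(n_kv_heads, head_dim, n_layers, bytes_per_elem=2):
    """Per-token KV cache = 2 (K and V) * n_kv_heads * head_dim * n_layers * dtype size."""
    return 2 * n_kv_heads * head_dim * n_layers * bytes_per_elem

mha = kv_cache_bytes_per_token(n_kv_heads=64, head_dim=128, n_layers=80)
gqa = kv_cache_bytes_per_token(n_kv_heads=8, head_dim=128, n_layers=80)
print(f"MHA: {mha / 2**20:.2f} MiB/token, GQA (8 KV heads): {gqa / 2**20:.3f} MiB/token")
# Prints 2.50 MiB/token vs 0.312 MiB/token: 8 KV heads cut cache traffic 8x,
# which is why GQA helps on bandwidth-bound hardware like the A100. MLA goes
# further by caching a compressed latent instead of full K/V, at some cost in
# compute to reconstruct them.
```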
-
DeepSeek-V4, LoRA, and other LLM techniques detailed in new blogs
A series of six blog posts has been published on Outcome School, detailing fundamental components of contemporary large language models. The posts cover technical concepts such as RMSNorm, DeepSeek-V4, LoRA, RoPE, GQA, …