WikiText-2
PulseAugur coverage of WikiText-2 — every cluster mentioning WikiText-2 across labs, papers, and developer communities, ranked by signal.
1 day with sentiment data
-
New BCJR-QAT method pushes LLM quantization to 2 bits per weight
Researchers have developed BCJR-QAT, a novel method for quantizing large language models to 2 bits per weight, a significant advancement beyond current post-training quantization techniques. This new approach uses a dif…
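The summary is cut off before the method's details, so as a rough illustration only, here is a minimal 2-bit quantization-aware-training sketch in PyTorch using a plain straight-through estimator. The level set, scaling rule, and everything else below are assumptions for illustration, not the BCJR-QAT method itself.

```python
import torch

def fake_quant_2bit(w: torch.Tensor) -> torch.Tensor:
    """Generic 2-bit fake quantization for QAT (illustrative, not BCJR-QAT).

    Maps weights onto 4 symmetric levels {-1.5, -0.5, 0.5, 1.5} * scale and
    uses a straight-through estimator so gradients pass through unchanged.
    """
    scale = w.abs().max().clamp(min=1e-8) / 1.5
    q = torch.clamp(torch.round(w / scale - 0.5) + 0.5, -1.5, 1.5) * scale
    return w + (q - w).detach()  # forward: quantized values; backward: identity
```

In the usual QAT recipe this stands in for, the forward pass sees only the four quantized levels while the backward pass updates full-precision shadow weights.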
-
New parameter E predicts Mixture-of-Experts model health, preventing dead experts
Researchers have introduced a new dimensionless control parameter, E = T*H/(O+B), to predict the health of expert ecologies in Mixture-of-Experts (MoE) models. This parameter, derived from four hyperparameters, can prev…
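The formula is the one concrete detail the summary gives, transcribed directly below; what the four hyperparameters T, H, O, and B stand for is truncated in the blurb, so the argument names are placeholders.

```python
def expert_health(T: float, H: float, O: float, B: float) -> float:
    """E = T*H / (O + B), as quoted in the summary.

    The meanings of T, H, O, and B are cut off in the source; treat the
    parameter names as placeholders, not the paper's definitions.
    """
    return T * H / (O + B)
```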
-
New MetaAdamW optimizer uses self-attention for adaptive learning rates
Researchers have developed MetaAdamW, a novel optimizer that enhances adaptive learning rates and weight decay by employing a self-attention mechanism. This Transformer-based approach dynamically adjusts hyperparameters…
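The summary names the ingredients (a Transformer-based mechanism that adjusts optimizer hyperparameters) but not the architecture, so the sketch below is a hypothetical stand-in: a tiny self-attention controller that reads per-parameter-group statistics and emits learning-rate multipliers. Every name, dimension, and statistic here is assumed, not taken from the MetaAdamW work.

```python
import torch
import torch.nn as nn

class LRController(nn.Module):
    """Hypothetical sketch of an attention-based LR controller (not MetaAdamW).

    Reads a small vector of statistics per parameter group, lets groups attend
    to one another, and outputs a learning-rate multiplier per group.
    """
    def __init__(self, n_stats: int = 3, d: int = 16):
        super().__init__()
        self.embed = nn.Linear(n_stats, d)
        self.attn = nn.MultiheadAttention(d, num_heads=2, batch_first=True)
        self.head = nn.Linear(d, 1)

    def forward(self, stats: torch.Tensor) -> torch.Tensor:
        # stats: (n_groups, n_stats), e.g. [grad_norm, param_norm, loss_delta]
        x = self.embed(stats).unsqueeze(0)        # (1, n_groups, d)
        x, _ = self.attn(x, x, x)                 # groups attend to each other
        return torch.sigmoid(self.head(x)).squeeze()  # multipliers in (0, 1)
```

In use, each AdamW parameter group's base learning rate would be scaled by its multiplier between steps; how the actual method couples the controller to weight decay is not recoverable from the truncated blurb.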
-
Associative-State Universal Transformers improve parameter efficiency with sparse retrieval
Researchers have developed UniMatrix, a novel Universal Transformer architecture that integrates structured recurrence with sparse retrieval mechanisms. While initial versions showed parameter efficiency and competitive…
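"Structured recurrence with sparse retrieval" points at two standard ingredients, shown together in this hypothetical sketch: one weight-tied encoder layer applied for several steps (the Universal Transformer idea) and a top-k lookup into a learned memory bank. The class name, shapes, and the wiring between the two parts are assumptions, not the UniMatrix design.

```python
import torch
import torch.nn as nn

class RecurrentRetrievalBlock(nn.Module):
    """Illustrative combination of weight-tied recurrence and sparse retrieval."""
    def __init__(self, d: int = 64, steps: int = 4, mem_slots: int = 256, k: int = 4):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.memory = nn.Parameter(torch.randn(mem_slots, d))
        self.steps, self.k = steps, k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d); the same layer weights are reused every step
        for _ in range(self.steps):
            scores = x @ self.memory.T                   # (B, L, mem_slots)
            top, idx = scores.topk(self.k, dim=-1)       # sparse: only k slots per token
            retrieved = (top.softmax(-1).unsqueeze(-1)
                         * self.memory[idx]).sum(-2)     # weighted sum of k slots
            x = self.layer(x + retrieved)
        return x
```

Weight tying across steps is what gives Universal-Transformer-style models their parameter efficiency; the top-k gate keeps each token's memory access sparse.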