WikiText-103
PulseAugur coverage of WikiText-103 — every cluster mentioning WikiText-103 across labs, papers, and developer communities, ranked by signal.
-
New parameter E predicts Mixture-of-Experts model health, preventing dead experts.
Researchers have introduced a new dimensionless control parameter, E = T*H/(O+B), to predict the health of expert ecologies in Mixture-of-Experts (MoE) models. This parameter, derived from four hyperparameters, can prev…
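The snippet gives the formula E = T*H/(O+B) but not the meaning of the four hyperparameters, so the names and threshold below are placeholder assumptions, not the paper's definitions; a minimal sketch of how such a health check might be computed:

    # Minimal sketch of the dimensionless health parameter E = T*H/(O+B).
    # The summary does not define the four hyperparameters, so the names
    # and the critical threshold below are illustrative assumptions.

    def expert_health(T: float, H: float, O: float, B: float) -> float:
        """Compute E = T*H / (O+B) for a Mixture-of-Experts configuration."""
        return (T * H) / (O + B)

    E_CRITICAL = 1.0  # assumed placeholder, not from the source

    if __name__ == "__main__":
        E = expert_health(T=2.0, H=0.5, O=0.3, B=0.7)
        print(f"E = {E:.3f}; dead-expert risk: {E < E_CRITICAL}")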
-
Jordan-RoPE: Non-Semisimple Relative Positional Encoding via Complex Jordan Blocks
Researchers have introduced Jordan-RoPE, a novel relative positional encoding method for transformer models that utilizes complex Jordan blocks. This approach generates oscillatory-polynomial features, enabling a distan…
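The summary does not spell out the encoding itself, but the algebra behind "oscillatory-polynomial features" from a complex Jordan block can be checked directly; a minimal NumPy sketch (the 2x2 block size and the angle theta are illustrative assumptions):

    import numpy as np

    # A 2x2 complex Jordan block J = [[lam, 1], [0, lam]] with lam on the
    # unit circle. Its n-th power is [[lam**n, n*lam**(n-1)], [0, lam**n]],
    # so relative position n maps to an oscillatory feature (lam**n) and an
    # oscillatory-polynomial feature (n * lam**(n-1)).

    theta = 0.1
    lam = np.exp(1j * theta)
    J = np.array([[lam, 1.0], [0.0, lam]], dtype=complex)

    for n in [1, 5, 50]:
        Jn = np.linalg.matrix_power(J, n)
        closed = np.array([[lam**n, n * lam**(n - 1)],
                           [0.0, lam**n]], dtype=complex)
        assert np.allclose(Jn, closed)
        print(n, Jn[0, 0], Jn[0, 1])  # e^{in*theta} and n*e^{i(n-1)*theta}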
-
New framework uses masked language models for efficient wireless token communication
Researchers have developed a novel context-aware wireless token communication framework that utilizes a masked language model (MLM) to improve transmission efficiency. This system enables robust token inference over noi…
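The framework's actual channel model and MLM are not given in the snippet; a minimal sketch of the core idea, using an off-the-shelf fill-mask pipeline to reconstruct tokens erased by a simulated channel (the model choice and erasure rate are assumptions):

    import random
    from transformers import pipeline  # assumes the `transformers` package

    # Illustrative erasure channel + MLM receiver; the paper's actual
    # protocol, model, and channel code are not given in the summary.
    unmasker = pipeline("fill-mask", model="bert-base-uncased")
    MASK = unmasker.tokenizer.mask_token  # "[MASK]" for BERT

    def transmit(tokens, erasure_prob=0.3, seed=0):
        """Simulate an erasure channel: each token is lost independently."""
        rng = random.Random(seed)
        return [MASK if rng.random() < erasure_prob else t for t in tokens]

    sent = "the model was evaluated on the wikitext language modeling benchmark".split()
    received = transmit(sent)

    # Receiver: fill erased slots left to right with the MLM's top prediction.
    while MASK in received:
        preds = unmasker(" ".join(received))
        # one mask -> list of dicts; several masks -> list of lists of dicts
        top = preds[0] if isinstance(preds[0], dict) else preds[0][0]
        received[received.index(MASK)] = top["token_str"].strip()

    print(" ".join(received))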
-
Researchers explore weight decay, in-context learning, and acceleration for Transformer models
Researchers have developed several new methods to improve the efficiency and theoretical understanding of Transformer models. One paper provides a functional-analytic characterization of weight decay, demonstrating its …
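The paper's functional-analytic characterization is not reproduced in the snippet; as a general refresher on the object being characterized (not the paper's construction), a minimal sketch showing that for plain SGD an L2 penalty and a multiplicative weight-decay step coincide:

    import numpy as np

    # General illustration: for plain SGD, an L2 penalty (lam/2)*||w||^2
    # and a multiplicative weight-decay step produce identical updates:
    # w <- (1 - lr*lam)*w - lr*grad(w).

    def grad_loss(w):            # gradient of an arbitrary smooth loss
        return np.cos(w)         # d/dw sin(w), purely illustrative

    w0, lr, lam = np.array([0.5, -1.2]), 0.1, 0.01

    # (a) SGD on the penalized objective L(w) + (lam/2)*||w||^2
    w_a = w0 - lr * (grad_loss(w0) + lam * w0)

    # (b) explicit weight decay applied to the plain SGD step
    w_b = (1 - lr * lam) * w0 - lr * grad_loss(w0)

    assert np.allclose(w_a, w_b)
    print(w_a)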
-
Phase-Associative Memory: Sequence Modeling in Complex Hilbert Space
Researchers have introduced a novel complex-valued sequence model called Phase-Associative Memory (PAM) that utilizes a Hilbert space formalism to better capture the indeterminate nature of meaning in semantic expressions. …
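PAM's exact formulation is not in the snippet; a minimal sketch of a complex-valued associative memory in a finite-dimensional Hilbert space, with random-phase keys and outer-product storage (all modeling choices here are illustrative assumptions, not PAM's):

    import numpy as np

    # Illustrative complex-valued associative memory: keys are random-phase
    # unit vectors in C^d (nearly orthogonal in expectation); pairs are
    # stored as M = sum_k |v_k><k_k| and read out with the Hilbert-space
    # inner product <k_k, q>.

    rng = np.random.default_rng(0)
    d, n_pairs = 256, 5

    keys = np.exp(1j * rng.uniform(0.0, 2 * np.pi, size=(n_pairs, d))) / np.sqrt(d)
    values = rng.normal(size=(n_pairs, d)) + 1j * rng.normal(size=(n_pairs, d))

    M = sum(np.outer(v, k.conj()) for k, v in zip(keys, values))

    query = keys[2]          # probe with a stored key
    readout = M @ query      # approximately values[2], plus small cross-talk

    sims = [abs(np.vdot(v, readout)) / (np.linalg.norm(v) * np.linalg.norm(readout))
            for v in values]
    print(np.argmax(sims), [f"{s:.2f}" for s in sims])  # argmax should be 2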
-
AutoCompress method isolates critical transformer layers for efficient compression
Researchers have developed AutoCompress, a novel method for compressing transformer models by isolating and preserving the critical first layer (Layer 0). This approach, termed Critical Layer Isolation (CLI), showed tha…
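The summary does not say which compression operator AutoCompress applies to the non-critical layers; a minimal sketch of the Critical Layer Isolation pattern, keeping Layer 0 intact and compressing the rest via rank-r SVD truncation (the SVD step is an assumed stand-in, not the paper's method):

    import numpy as np

    # Illustrative Critical Layer Isolation: preserve Layer 0 exactly and
    # compress every other layer, here via rank-r SVD truncation of its
    # weight matrix.

    def low_rank(W: np.ndarray, r: int) -> np.ndarray:
        """Best rank-r approximation of W in the Frobenius norm."""
        U, S, Vt = np.linalg.svd(W, full_matrices=False)
        return (U[:, :r] * S[:r]) @ Vt[:r]

    rng = np.random.default_rng(0)
    layers = [rng.normal(size=(64, 64)) for _ in range(6)]  # toy weights

    compressed = [
        W if i == 0 else low_rank(W, r=8)   # isolate/preserve Layer 0
        for i, W in enumerate(layers)
    ]

    for i, (W, C) in enumerate(zip(layers, compressed)):
        err = np.linalg.norm(W - C) / np.linalg.norm(W)
        print(f"layer {i}: relative error {err:.3f}")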