PulseAugur

attention

PulseAugur coverage of attention — every cluster mentioning attention across labs, papers, and developer communities, ranked by signal.

Total · 30d: 85 (85 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 75 (75 over 90d)
RECENT · PAGE 1/1 · 5 TOTAL
  1. TOOL · CL_16032 ·

    Rhamba framework integrates attention and Mamba for fMRI self-supervised learning

    Researchers have developed Rhamba, a novel framework for self-supervised learning on resting-state fMRI data. This framework combines region-aware masking with hybrid Attention-Mamba architectures to improve the analysis… (an illustrative sketch of region-aware masking follows the list).

  2. RESEARCH · CL_06711 ·

    Switch Attention dynamically routes between full and sliding window attention

    Researchers have introduced Switch Attention (SwiAttn), a novel hybrid transformer architecture designed to address the computational bottleneck of standard full attention mechanisms in long-context language modeling. S… (a rough sketch of the full/sliding-window routing idea follows the list).

  3. RESEARCH · CL_05188 ·

    Beyond Linearity in Attention Projections: The Case for Nonlinear Queries

    Researchers are exploring the fundamental mechanisms behind transformer attention, with new papers analyzing its gradient flow structure and dynamics. One study interprets attention as a gradient flow on a unit sphere, … (an illustrative sketch of a nonlinear query projection follows the list).

  4. COMMENTARY · CL_04670 ·

    Eugene Yan shares guide to running weekly AI paper club for learning communities

    Eugene Yan details a successful weekly paper club that has met for 18 months, discussing at least 80 AI-related papers. The club focuses on foundational concepts, models, training, and inference techniques within machine…

  5. RESEARCH · CL_04837 ·

    Mamba model offers Transformer-level performance with faster inference and longer context

    Mamba, a new State Space Model (SSM), presents an alternative to the dominant Transformer architecture in AI. It aims to match Transformer performance and scaling laws while efficiently handling extremely long sequences… (a naive sketch of the selective state-space recurrence follows the list).
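
ILLUSTRATIVE SKETCHES

For cluster 1 (Rhamba), region-aware masking means hiding entire brain regions' time series rather than random timepoints, and training the encoder to reconstruct them. The tensor shapes, function name, and zero-fill strategy below are assumptions chosen for illustration, not Rhamba's published code:

import torch

def region_aware_mask(roi_timeseries, mask_ratio=0.3):
    # roi_timeseries: (batch, regions, time) per-region BOLD signals.
    # Hide a random subset of whole regions; the model must reconstruct them.
    b, r, _ = roi_timeseries.shape
    n_masked = max(1, int(r * mask_ratio))
    mask = torch.zeros(b, r, dtype=torch.bool)
    for i in range(b):
        mask[i, torch.randperm(r)[:n_masked]] = True
    masked = roi_timeseries.clone()
    masked[mask] = 0.0          # zero out every timepoint of the masked regions
    return masked, mask

# Hypothetical training step: a hybrid Attention-Mamba encoder reconstructs the
# hidden regions, with the loss taken only over what was masked, e.g.
#   pred = encoder(masked)
#   loss = ((pred - roi_timeseries)[mask] ** 2).mean()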
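
For cluster 2 (Switch Attention), the core idea is routing between full causal attention and a cheaper sliding-window variant. The gating rule, window size, and names below are assumptions sketching the general mechanism, not SwiAttn's published design:

import torch
import torch.nn.functional as F

def sliding_window_mask(seq_len, window, device=None):
    # Let each query attend only to itself and the previous `window - 1` keys.
    idx = torch.arange(seq_len, device=device)
    rel = idx[:, None] - idx[None, :]          # query index minus key index
    return (rel >= 0) & (rel < window)

def switch_attention(q, k, v, gate_logit, window=128):
    # q, k, v: (batch, heads, seq, head_dim). `gate_logit` is a scalar computed
    # elsewhere (e.g. from a pooled summary of the layer input); hard routing
    # here is purely illustrative.
    if torch.sigmoid(gate_logit) > 0.5:
        mask = sliding_window_mask(q.shape[-2], window, device=q.device)
        return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)   # cheap path
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)       # full attention

A trainable version would need the gate to stay differentiable, e.g. by softly mixing the two outputs or using a straight-through estimator.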
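
For cluster 3 (nonlinear queries), the simplest way to see the idea is to swap the usual linear query projection q = x W_q for a small MLP while leaving keys and values linear. This single-head module illustrates that substitution; it is not the paper's exact formulation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class NonlinearQueryAttention(nn.Module):
    # Single-head attention in which only the query projection is nonlinear.
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.q_proj = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)

    def forward(self, x):                                    # x: (batch, seq, dim)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / x.shape[-1] ** 0.5
        return F.softmax(scores, dim=-1) @ v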
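
For cluster 5 (Mamba), the heart of the architecture is a selective state-space recurrence whose B, C, and step size depend on the input. Production implementations fuse this into a parallel scan with hardware-aware kernels; the naive per-timestep loop below, with hypothetical projection layers, only shows the recurrence itself:

import torch
import torch.nn.functional as F

def selective_ssm(x, A, B_proj, C_proj, dt_proj):
    # x: (batch, seq, dim); A: (dim, state) learned (negative) decay parameters.
    # B_proj, C_proj: Linear(dim -> state); dt_proj: Linear(dim -> dim).
    b, t, d = x.shape
    h = torch.zeros(b, d, A.shape[-1], device=x.device)      # hidden state per channel
    ys = []
    for i in range(t):
        xt = x[:, i]                                          # (b, d)
        dt = F.softplus(dt_proj(xt))                          # input-dependent step size
        Bt, Ct = B_proj(xt), C_proj(xt)                       # input-dependent B and C
        A_bar = torch.exp(dt.unsqueeze(-1) * A)               # discretized decay, (b, d, state)
        h = A_bar * h + dt.unsqueeze(-1) * Bt.unsqueeze(1) * xt.unsqueeze(-1)
        ys.append((h * Ct.unsqueeze(1)).sum(-1))              # readout, (b, d)
    return torch.stack(ys, dim=1)                             # (b, seq, dim)

Because the state has fixed size and the update is linear in sequence length, generation cost does not grow with context the way attention's KV cache does, which is the efficiency argument the cluster summary refers to.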