PulseAugur
LIVE 05:54:18
ENTITY recurrent neural network

recurrent neural network

PulseAugur coverage of recurrent neural network: every cluster mentioning the entity across labs, papers, and developer communities, ranked by signal.

Total · 30d
8
8 over 90d
Releases · 30d
0
0 over 90d
Papers · 30d
8
8 over 90d
TIER MIX · 90D
SENTIMENT · 30D

1 day with sentiment data

RECENT · PAGE 1/1 · 13 TOTAL
  1. RESEARCH · CL_27516 ·

    New RNN module boosts BCI accuracy and explainability

    Researchers have developed a new Post-Recurrent Module (PRM) to enhance the explainability and performance of Recurrent Neural Networks (RNNs) used in P300-based Brain-Computer Interfaces (BCIs). This module improves cl…

  2. TOOL · CL_24312 ·

    LSTM networks overcome RNN memory limitations with gating mechanisms

    The Long Short-Term Memory (LSTM) network was developed to address the limitations of traditional Recurrent Neural Networks (RNNs) in handling sequential data. Vanilla RNNs struggle with remembering information over lon…
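The gating mechanism this cluster describes can be sketched as a single LSTM cell step. This is a minimal NumPy illustration of the standard forget/input/output gates, not code from the cluster's source; all names and shapes are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM cell step: gates decide what to forget, admit, and expose."""
    Wf, Wi, Wo, Wc, bf, bi, bo, bc = params
    z = np.concatenate([h_prev, x])   # gate inputs: previous hidden state + current input
    f = sigmoid(Wf @ z + bf)          # forget gate: how much old memory to keep
    i = sigmoid(Wi @ z + bi)          # input gate: how much new content to write
    o = sigmoid(Wo @ z + bo)          # output gate: how much memory to expose
    c_tilde = np.tanh(Wc @ z + bc)    # candidate memory content
    c = f * c_prev + i * c_tilde      # additive cell update eases long-range gradient flow
    h = o * np.tanh(c)                # hidden state passed to the next time step
    return h, c

# Tiny demo: random weights, 3-dim input, 4-dim hidden state
rng = np.random.default_rng(0)
nh, nx = 4, 3
params = [rng.standard_normal((nh, nh + nx)) * 0.1 for _ in range(4)] + \
         [np.zeros(nh) for _ in range(4)]
h, c = np.zeros(nh), np.zeros(nh)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(nx), h, c, params)
print(h.shape)  # (4,)
```

The additive cell update `c = f * c_prev + i * c_tilde` is the key design choice: unlike a vanilla RNN's repeated matrix multiplication, it lets gradients pass through long sequences with less vanishing.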

  3. TOOL · CL_22129 ·

    Brain-inspired FRE-RNN makes Equilibrium Propagation more practical for AI

    Researchers have developed a new recurrent neural network architecture, the Feedback-regulated REsidual recurrent neural network (FRE-RNN), designed to improve the practicality of Equilibrium Propagation (EP) for brain-…

  4. TOOL · CL_15609 ·

    New CNN-Transformer hybrid model enhances spatiotemporal prediction efficiency

    Researchers have introduced a new Convolutional Neural Network (CNN) architecture called MIMO-ESP, designed to improve spatiotemporal prediction tasks. This model addresses limitations in existing CNNs, such as difficul…

  5. RESEARCH · CL_15416 ·

    ParaRNN offers interpretable, parallelizable recurrent neural networks for time-dependent data

    Researchers have introduced ParaRNN, a novel recurrent neural network designed for time-dependent data that aims to improve interpretability and parallelization. This model decomposes recurrent dynamics into distinct, i…

  6. RESEARCH · CL_14464 ·

    Deep Jacobian estimation method characterizes nonlinear control in biological systems

    Researchers have developed a new deep learning method called JacobianODE to estimate the Jacobian of dynamical systems from time-series data. This approach allows for a more nuanced understanding of control between inte…

  7. RESEARCH · CL_11923 ·

    Selective-Update RNNs match Transformer accuracy with greater efficiency

    Researchers have developed a new type of Recurrent Neural Network (RNN) called Selective-Update RNNs (suRNNs) that can efficiently handle long-range sequence modeling. Unlike traditional RNNs that update at every time s…

  8. RESEARCH · CL_06358 ·

    New non-Euclidean neural quantum states outperform Euclidean counterparts in VMC experiments

    Researchers have introduced new non-Euclidean neural quantum states (NQS) by extending previous work with Poincaré hyperbolic GRU to include Lorentz RNN, Lorentz GRU, and Poincaré RNN. These new hyperbolic NQS variants …

  9. RESEARCH · CL_10250 ·

    New frameworks offer gradient-free and hierarchical learning for stable deep network training

    Two new research papers propose alternative methods for training deep neural networks. One paper introduces a projection-based framework called PJAX, which treats training as a feasibility problem solvable through itera…

  10. RESEARCH · CL_05127 ·

    StateX framework boosts RNN recall by expanding model states post-training

    Researchers have developed StateX, a post-training framework designed to improve the recall capabilities of recurrent neural networks (RNNs). This method efficiently expands the states of pre-trained RNNs, such as linea…

  11. RESEARCH · CL_00875 ·

    RWKV project revives RNNs to challenge Transformer dominance in LLMs

    The RWKV (Receptance Weighted Key Value) project introduces a novel architecture that revives Recurrent Neural Networks (RNNs) while incorporating advantages typically found in Transformers. This approach aims to overco…

  12. COMMENTARY · CL_04787 ·

    Eugene Yan reviews OMSCS Machine Learning for Trading course, highlighting assignments and coding

    Eugene Yan shares his experience and insights from the OMSCS CS7646 (Machine Learning for Trading) course. He highlights the course's focus on sequential modeling and its applicability beyond financial markets, such as …

  13. RESEARCH · CL_02615 ·

    OpenAI unveils VAEs for improved representation learning and density estimation

    OpenAI has published research on a Variational Autoencoder (VAE) that combines VAEs with autoregressive models like RNNs and PixelCNNs. This new VAE architecture allows for control over what the latent code learns, enab…