PulseAugur

chūnibyō

PulseAugur coverage of chūnibyō — every cluster mentioning chūnibyō across labs, papers, and developer communities, ranked by signal.

Total · 30d: 0 (0 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 0 (0 over 90d)
TIER MIX · 90D

No coverage in the last 90 days.

RECENT · PAGE 1/1 · 6 TOTAL
  1. RESEARCH · CL_21756 ·

    New research challenges independence assumption in Deep Q-Learning algorithms

    Researchers have developed a new statistical analysis for Deep Q-Networks (DQN) that accounts for temporal dependence in training data. This approach models minibatches as $\tau$-mixing, moving beyond the typical assump…

  2. TOOL · CL_18574 ·

    Reinforcement learning enhances autonomous target tracking accuracy and robustness

    Researchers have developed a deep reinforcement learning approach for autonomous bearings-only tracking of moving targets. The system formulates the observer maneuver problem as a belief Markov decision process, using a…

  3. TOOL · CL_18629 ·

    NaviGNN AI framework optimizes sustainable mobility in futuristic smart cities

    Researchers have developed NaviGNN, a novel AI system designed to optimize mobility in futuristic smart cities with complex vertical and linear structures. This system integrates multi-agent reinforcement learning and g…

  4. RESEARCH · CL_16192 ·

    AI routing framework boosts LEO satellite network performance and efficiency

    Researchers have developed a novel spatial-temporal learning-based distributed routing framework designed for dynamic Low Earth Orbit (LEO) satellite networks. This framework integrates Graph Attention Networks (GAT) an…

  5. RESEARCH · CL_11904 ·

    New C++ engine HASE achieves 33M steps/sec for multi-agent RL training

    Researchers have developed a new C++ engine called Hide-And-Seek-Engine (HASE) designed to significantly improve the efficiency of training reinforcement learning agents in decentralized, partially observable environmen…

  6. RESEARCH · CL_02556 ·

    OpenAI researchers reveal AI vulnerabilities to adversarial attacks

    OpenAI researchers are exploring the transferability of adversarial robustness across different types of perturbations in neural networks. Their findings indicate that robustness against one perturbation type does not a…