PulseAugur
CIFAR-10

PulseAugur coverage of CIFAR-10 — every cluster mentioning CIFAR-10 across labs, papers, and developer communities, ranked by signal.

Total · 30d: 63 (63 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 63 (63 over 90d)
[Charts: tier mix · 90d, relationships, sentiment · 30d (4 days with sentiment data)]

RECENT · PAGE 2/3 · 54 TOTAL
  1. RESEARCH · CL_21948 ·

    New AI unlearning methods balance data removal with model utility

    Researchers have developed new methods for machine unlearning, a process that removes specific data from AI models without full retraining. One approach, SHRED, uses self-distillation and logit demotion to identify and …
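SHRED's exact mechanism is cut off above; as a hedged illustration only, the general idea of "logit demotion" in self-distillation unlearning can be sketched as pushing down the teacher's logit for the class to be forgotten before the student distills from it. The function names and the fixed margin here are assumptions, not the paper's method.

```python
import math

def demote_logits(teacher_logits, forget_class, margin=10.0):
    """Push the forget-class logit down by a fixed margin (illustrative only).

    In self-distillation unlearning, the student would then be trained to
    match these demoted teacher logits instead of the originals.
    """
    demoted = list(teacher_logits)
    demoted[forget_class] -= margin
    return demoted

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 5.0, 1.0]                       # class 1 dominates originally
target = softmax(demote_logits(logits, forget_class=1))
```

After demotion, nearly all probability mass shifts away from the forgotten class, so the student learns to stop predicting it.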

  2. RESEARCH · CL_16055 ·

    New research explores ensemble models for improved AI performance and robustness

    Two new research papers introduce novel methods for improving ensemble models in machine learning. The first, PACE, combines pruning and compression techniques to create more efficient and interpretable ensembles, outpe…
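PACE's actual pruning and compression scheme is truncated above; a generic sketch of ensemble pruning is greedy forward selection, which keeps only the members whose averaged prediction helps validation accuracy. Everything here (function names, data) is illustrative.

```python
import numpy as np

def greedy_prune(member_probs, y, k):
    """Greedily select k members whose averaged probabilities maximize accuracy.

    member_probs: (n_members, n_samples) predicted P(class=1); y: 0/1 labels.
    """
    chosen, remaining = [], list(range(len(member_probs)))
    for _ in range(k):
        def acc_with(m):
            avg = member_probs[chosen + [m]].mean(axis=0)
            return ((avg >= 0.5).astype(int) == y).mean()
        best = max(remaining, key=acc_with)
        chosen.append(best)
        remaining.remove(best)
    return chosen

probs = np.array([[0.9, 0.8, 0.2],     # accurate member
                  [0.4, 0.6, 0.7],     # noisy member
                  [0.8, 0.9, 0.1]])    # accurate member
labels = np.array([1, 1, 0])
subset = greedy_prune(probs, labels, k=2)
```

A pruned ensemble like this is both cheaper to run and easier to interpret, since only a few named members contribute to each prediction.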

  3. TOOL · CL_15706 ·

    Checkerboard attack offers efficient, learning-free backdoor for deep learning models

    Researchers have developed a new method called Checkerboard for launching clean-label backdoor attacks on deep learning models. This learning-free technique uses a closed-form checkerboard trigger derived from linear se…
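The paper's closed-form derivation is cut off above, so as a hedged sketch only, here is what a low-amplitude checkerboard trigger blended into a CIFAR-sized image could look like; the cell size and amplitude are assumptions for illustration.

```python
import numpy as np

def checkerboard_trigger(h, w, cell=2, amplitude=0.05):
    """Closed-form checkerboard pattern with values in {-amplitude, +amplitude}."""
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pattern = ((yy // cell + xx // cell) % 2) * 2.0 - 1.0   # +/-1 tiles
    return amplitude * pattern

def apply_trigger(image, trigger):
    """Additively blend the trigger into a normalized [0, 1] image."""
    return np.clip(image + trigger[..., None], 0.0, 1.0)

img = np.full((32, 32, 3), 0.5)                 # gray CIFAR-sized image
poisoned = apply_trigger(img, checkerboard_trigger(32, 32))
```

Because the trigger is a fixed analytic pattern, no optimization over the victim model is needed, which is what makes such attacks "learning-free."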

  4. TOOL · CL_15639 ·

    New HyCAS defense bridges gap between certified and empirical adversarial robustness

    Researchers have developed a new adversarial defense technique called Hybrid Convolutions with Attention Stochasticity (HyCAS). This method aims to bridge the gap between theoretical robustness guarantees and practical …

  5. RESEARCH · CL_14406 ·

    ROSA optical neural network architecture boosts efficiency and robustness

    Researchers have introduced ROSA, a novel microring-based optical neural network architecture designed for enhanced robustness and energy efficiency. This design incorporates an optical shift-and-add module and a layer-…

  6. RESEARCH · CL_14337 ·

    Vision Transformers leverage DCT for improved attention and efficiency

    Researchers have developed a novel approach using the Discrete Cosine Transform (DCT) to enhance Vision Transformers. This method includes a DCT-based initialization strategy for self-attention, which improves classific…
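The paper's exact initialization strategy is truncated above; the standard building block it relies on, an orthonormal DCT-II basis matrix of the kind one could use to initialize attention projections, is sketched below.

```python
import math

def dct_basis(n):
    """Orthonormal DCT-II basis: rows are cosine basis vectors (B @ B.T = I)."""
    return [[math.sqrt((1.0 if k == 0 else 2.0) / n)
             * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
             for i in range(n)]
            for k in range(n)]

B = dct_basis(8)
```

Orthonormality means the initialization neither amplifies nor shrinks activations, which is one plausible reason a DCT start helps attention training.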

  7. RESEARCH · CL_14418 ·

    Kernel Hopfield networks show high storage capacity, stability limits analyzed

    Researchers have analyzed the geometric properties and storage capacity limits of kernel Hopfield networks trained with Kernel Logistic Regression (KLR). Their experiments, using random sequences and CIFAR-10 image embe…

  8. RESEARCH · CL_11881 ·

    New research reveals implicit bias drives neural scaling laws in deep learning

    Researchers have identified two new dynamical scaling laws that describe how neural network performance changes with complexity measures throughout training. These laws, observed across various architectures like CNNs a…

  9. RESEARCH · CL_11892 ·

    New method corrects subsampling bias in drifting generative models

    Researchers have developed Analytical Bias Correction (ABC), a method to address subsampling bias in drifting models, which are used for one-step generative tasks. The bias arises from using minibatches to estimate cent…

  10. RESEARCH · CL_11689 ·

    New DALS framework optimizes learning rates for neural network training

    Researchers have introduced a new framework called Discriminative Adaptive Layer Scaling (DALS) to optimize learning rates in neural networks. DALS categorizes the evolution of learning rate strategies into five generat…
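DALS's own rule is behind the truncation; a well-known instance of per-layer learning-rate scaling, the LARS-style trust ratio (layer weight norm over gradient norm), is sketched here purely to illustrate the category of method the summary describes.

```python
import math

def layer_lr(base_lr, weights, grads, eps=1e-8):
    """Scale the base LR per layer by ||w|| / ||g|| (LARS-style trust ratio)."""
    w_norm = math.sqrt(sum(w * w for w in weights))
    g_norm = math.sqrt(sum(g * g for g in grads))
    return base_lr * w_norm / (g_norm + eps)

lr = layer_lr(0.1, weights=[3.0, 4.0], grads=[0.6, 0.8])
```

Layers with large weights but small gradients get a proportionally larger step, which keeps all layers moving at a comparable relative rate.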

  11. RESEARCH · CL_11404 ·

    Decoupled Descent: Exact Test Error Tracking Via Approximate Message Passing

    Researchers have developed a new training algorithm called Decoupled Descent (DD) that aims to eliminate the generalization gap in parametric models. DD uses approximate message passing theory to cancel biases caused by…

  12. RESEARCH · CL_11405 ·

    Linear-Core Surrogates offer smooth loss functions with linear rates for classification

    Researchers have introduced Linear-Core (LC) Surrogates, a novel family of convex loss functions designed to combine the benefits of smooth and piecewise-linear losses in machine learning. These surrogates are different…
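The LC surrogate's exact form is truncated above; one standard way to combine a smooth core with a piecewise-linear tail, shown only as an analogy, is the quadratically smoothed hinge loss.

```python
def smoothed_hinge(margin, delta=1.0):
    """Quadratic near the hinge kink, linear in the tail, zero past margin 1."""
    if margin >= 1.0:
        return 0.0
    if margin <= 1.0 - delta:
        return (1.0 - margin) - delta / 2.0       # linear tail
    return (1.0 - margin) ** 2 / (2.0 * delta)    # smooth quadratic core
```

The two branches meet with matching value and slope at margin 1 - delta, so the loss is differentiable everywhere while keeping the hinge's linear growth on badly misclassified points.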

  13. RESEARCH · CL_10213 ·

    New Federated Learning method enhances robustness against adversarial attacks

    Researchers have developed a new method for robust federated learning that can withstand adversarial attacks. The approach, called Loss-Based Client Clustering, requires only two honest participants, such as the server …
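The paper's clustering criterion is truncated; as a hedged sketch of the general idea, one can split clients into low- and high-loss groups and aggregate updates only from the low-loss cluster. The threshold rule below is an assumption, not the paper's.

```python
def select_clients(client_losses, ratio=1.5):
    """Keep the low-loss cluster of clients (illustrative rule only).

    Clients whose reported loss exceeds `ratio` times the minimum are
    treated as a separate, possibly adversarial cluster and excluded.
    """
    floor = min(client_losses.values())
    return sorted(c for c, l in client_losses.items() if l <= ratio * floor)

losses = {"c1": 0.31, "c2": 0.29, "c3": 2.4, "c4": 0.35}   # c3 looks adversarial
honest = select_clients(losses)
```

The appeal of loss-based selection is that it needs no client identities or history, only the losses reported in the current round.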

  14. RESEARCH · CL_09896 ·

    NeuroPlastic optimizer enhances deep learning with biologically inspired plasticity

    Researchers have developed NeuroPlastic, a novel optimization algorithm for deep learning that draws inspiration from biological synaptic plasticity. This method augments standard gradient-based updates with a multi-sig…

  15. RESEARCH · CL_08645 ·

    New UCB strategies enhance adaptive deep neural networks for edge computing

    Researchers have introduced four new Upper Confidence Bound (UCB) strategies to Adaptive Deep Neural Networks (ADNNs) for edge computing environments. These strategies, including UCB-Bayes, UCB-Tuned, and UCB-V, aim to …
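How these strategies plug into ADNNs is in the paper; the underlying bandit indices themselves are standard (Auer et al.), and the UCB1 and UCB-Tuned forms can be sketched directly.

```python
import math

def ucb1(mean, n_arm, n_total):
    """Classic UCB1 exploration bonus."""
    return mean + math.sqrt(2.0 * math.log(n_total) / n_arm)

def ucb_tuned(mean, mean_sq, n_arm, n_total):
    """UCB-Tuned: variance-aware bonus, capped at the Bernoulli variance 1/4."""
    var_bound = mean_sq - mean ** 2 + math.sqrt(2.0 * math.log(n_total) / n_arm)
    return mean + math.sqrt(math.log(n_total) / n_arm * min(0.25, var_bound))

def pick_arm(stats, n_total):
    """stats: list of (mean, n_arm) pairs; pick the arm with the largest UCB1 index."""
    return max(range(len(stats)),
               key=lambda a: ucb1(stats[a][0], stats[a][1], n_total))
```

An arm with the same empirical mean but fewer pulls gets a larger bonus, which is exactly the exploration pressure an adaptive DNN needs when choosing exit points or configurations on an edge device.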

  16. RESEARCH · CL_08186 ·

    QB-LIF neuron boosts SNN efficiency with learnable scale and burst spiking

    Researchers have introduced QB-LIF, a novel neuron model for spiking neural networks (SNNs) that addresses the information throughput limitations of binary spike coding. QB-LIF reformulates burst spiking using a learnab…
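QB-LIF's learnable scale and burst mechanism are behind the truncation; the binary-spiking LIF baseline it extends is standard and can be sketched in a few lines.

```python
def lif_step(v, current, tau=2.0, v_th=1.0, v_reset=0.0):
    """One Euler step of a standard leaky integrate-and-fire neuron.

    QB-LIF augments this kind of unit with learnable scaling and burst
    spiking; this sketch is only the plain binary-spike baseline.
    """
    v = v + (current - v) / tau        # leaky integration toward the input
    if v >= v_th:
        return v_reset, 1              # emit a binary spike and reset
    return v, 0

v, spikes = 0.0, 0
for _ in range(10):
    v, s = lif_step(v, current=1.5)
    spikes += s
```

With a constant suprathreshold input the neuron fires periodically; the limitation the summary mentions is that each firing carries only one bit, which burst coding is meant to overcome.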

  17. RESEARCH · CL_08192 ·

    Vision SmolMamba uses spike-guided pruning for energy-efficient vision models

    Researchers have introduced Vision SmolMamba, a novel energy-efficient spiking state-space architecture designed for visual modeling. This architecture integrates spike-driven dynamics with linear-time selective recurre…

  18. RESEARCH · CL_08339 ·

    Researchers analyze Adam's tradeoffs and enhance SignSGD with hybrid switching strategy

    Two new research papers explore advancements in optimization algorithms for machine learning. One paper provides a theoretical analysis of the Adam optimizer, detailing its performance under non-stationary objectives an…
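The paper's switching criterion is truncated; as a hedged sketch, the signSGD update and a simple hybrid that falls back to plain SGD after a fixed number of steps (the switch rule here is an assumption) look like this.

```python
def sign(x):
    return (x > 0) - (x < 0)

def hybrid_step(w, grad, step, lr=0.1, switch_at=50):
    """signSGD early, plain SGD after `switch_at` steps (criterion assumed)."""
    if step < switch_at:
        return [wi - lr * sign(g) for wi, g in zip(w, grad)]
    return [wi - lr * g for wi, g in zip(w, grad)]

# Minimize f(w) = sum(w_i^2), whose gradient is 2w.
w = [3.0, -2.0]
for t in range(100):
    w = hybrid_step(w, [2.0 * wi for wi in w], t)
```

signSGD takes cheap, communication-friendly fixed-size steps early on, but oscillates near the optimum; switching to magnitude-aware SGD lets the iterate settle.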

  19. RESEARCH · CL_06561 ·

    Researchers develop POUR, a provably optimal method for unlearning AI representations

    Researchers have developed a new method called POUR (Provably Optimal Unlearning of Representations) to effectively remove specific concepts or training data from machine learning models without requiring a full retrain…

  20. RESEARCH · CL_06463 ·

    Learn&Drop method halves CNN training time by dropping layers

    Researchers have developed a novel method called Learn&Drop to accelerate the training of Convolutional Neural Networks (CNNs). This technique dynamically assesses layer parameter changes during training and scales down…
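Learn&Drop's actual assessment rule is cut off above; a hedged sketch of the general mechanism is to track each layer's relative parameter change between epochs and stop updating layers that have nearly converged. The metric and threshold here are assumptions.

```python
import math

def relative_change(prev, curr, eps=1e-12):
    """L2 norm of the update relative to the layer's parameter norm."""
    diff = math.sqrt(sum((c - p) ** 2 for p, c in zip(prev, curr)))
    norm = math.sqrt(sum(p * p for p in prev))
    return diff / (norm + eps)

def layers_to_freeze(prev_params, curr_params, threshold=0.01):
    """Freeze layers whose parameters have nearly stopped moving."""
    return [name for name in prev_params
            if relative_change(prev_params[name], curr_params[name]) < threshold]

prev = {"conv1": [1.0, 2.0], "conv2": [1.0, 2.0]}
curr = {"conv1": [1.0005, 2.0], "conv2": [1.5, 2.5]}
frozen = layers_to_freeze(prev, curr)
```

Frozen layers skip both the backward pass and optimizer updates, which is where the training-time savings come from.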