PulseAugur

New research explores activation functions beyond ReLU in neural networks

A new paper examines the theoretical underpinnings of neural network kernels, focusing on activation functions beyond the standard ReLU. The researchers characterize the Reproducing Kernel Hilbert Spaces (RKHS) induced by various non-smooth activation functions, extending existing theory to SELU, ELU, and LeakyReLU. The findings indicate that many common activations yield equivalent RKHS across different network depths, while polynomial activations produce depth-dependent RKHS. The study also provides insights into the smoothness of Neural Network Gaussian Process (NNGP) sample paths in infinitely wide networks.

Summary written by gemini-2.5-flash-lite from 1 source.
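To make the NNGP side concrete, here is a minimal Monte Carlo sketch of the standard layer-wise NNGP kernel recursion for a fully connected network, evaluated with ReLU, LeakyReLU, and ELU. This is the textbook construction, not code from the paper; the weight/bias variances, depths, and sample count are illustrative assumptions.

import numpy as np

# Monte Carlo sketch of the NNGP kernel recursion for a fully connected
# network (standard construction, not the paper's code). For inputs x, x',
# the layer-(l+1) kernel is
#   K^{l+1}(x, x') = sigma_w^2 * E[phi(u) phi(v)] + sigma_b^2,
# where (u, v) ~ N(0, Lambda^l) and Lambda^l is the 2x2 covariance built
# from the layer-l kernel values. Hyperparameters here are illustrative.

def nngp_kernel(x1, x2, phi, depth=3, sigma_w2=2.0, sigma_b2=0.0,
                n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # Layer-0 kernel: inner products of the raw inputs.
    k11, k22, k12 = x1 @ x1, x2 @ x2, x1 @ x2
    for _ in range(depth):
        cov = np.array([[k11, k12], [k12, k22]])
        # Draw (u, v) from the bivariate Gaussian given by the current kernel.
        uv = rng.multivariate_normal(np.zeros(2), cov, size=n_samples)
        fu, fv = phi(uv[:, 0]), phi(uv[:, 1])
        k11 = sigma_w2 * np.mean(fu * fu) + sigma_b2
        k22 = sigma_w2 * np.mean(fv * fv) + sigma_b2
        k12 = sigma_w2 * np.mean(fu * fv) + sigma_b2
    return k12

relu = lambda z: np.maximum(z, 0.0)
leaky_relu = lambda z: np.where(z > 0, z, 0.01 * z)
elu = lambda z: np.where(z > 0, z, np.expm1(z))  # alpha = 1

x1 = np.array([1.0, 0.0])
x2 = np.array([0.6, 0.8])
for name, phi in [("ReLU", relu), ("LeakyReLU", leaky_relu), ("ELU", elu)]:
    for depth in (1, 3):
        print(f"{name:10s} depth={depth}: K = {nngp_kernel(x1, x2, phi, depth):.4f}")

Note the distinction the summary draws: the kernel values computed here change with depth, but the paper's result concerns the induced RKHS, which for activations like these is reported to be equivalent across depths.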

IMPACT Extends theoretical understanding of neural network behavior, potentially informing future model architectures and training strategies.

RANK_REASON This is a research paper published on arXiv detailing theoretical advancements in neural network kernels and activation functions.

Read on arXiv stat.ML →

COVERAGE [1]

  1. arXiv stat.ML TIER_1 · David Holzmüller, Max Schölpple

    Beyond ReLU: How Activations Affect Neural Kernels and Random Wide Networks

    arXiv:2506.22429v2 Abstract: In recent years, the neural tangent kernel (NTK) and neural network Gaussian process kernel (NNGP) have given theoreticians tractable limiting cases of fully connected neural networks. However, the properties of these kernels are …