A new paper explores the theoretical underpinnings of neural network kernels, specifically focusing on activation functions beyond the standard ReLU. Researchers characterized the Reproducing Kernel Hilbert Spaces (RKHS) for various non-smooth activation functions, extending existing theory to functions like SELU, ELU, and LeakyReLU. The findings indicate that many common activations yield equivalent RKHS across different network depths, while polynomial activations yield depth-dependent RKHS. The study also provides insights into the smoothness of Neural Network Gaussian Process (NNGP) sample paths in infinitely wide networks.
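For context on the objects the paper studies: the NNGP kernel of an infinitely wide network is built by a layer-wise recursion, and for ReLU that recursion has a well-known closed form (the degree-1 arc-cosine kernel). The sketch below, which is illustrative and not taken from the paper, shows the standard recursion; the function name and parameters are chosen here for clarity. Activations like SELU or ELU generally lack such a simple closed form, which is part of why characterizing their kernels and RKHS requires new theory.

```python
import numpy as np

def relu_nngp_layer(K, sigma_w2=2.0, sigma_b2=0.0):
    """One NNGP kernel recursion step for ReLU (degree-1 arc-cosine kernel).

    K is the previous layer's kernel matrix; sigma_w2 / sigma_b2 are the
    weight and bias variances of the layer (names are illustrative).
    """
    d = np.sqrt(np.outer(np.diag(K), np.diag(K)))      # sqrt(K(x,x) K(x',x'))
    cos_t = np.clip(K / d, -1.0, 1.0)                  # guard against rounding
    theta = np.arccos(cos_t)
    K_new = (sigma_w2 / (2 * np.pi)) * d * (
        np.sin(theta) + (np.pi - theta) * np.cos(theta)
    )
    return K_new + sigma_b2

# Input-layer kernel from raw inputs, then two hidden ReLU layers.
X = np.random.randn(4, 3) / np.sqrt(3)
K = X @ X.T
for _ in range(2):
    K = relu_nngp_layer(K)
```

With `sigma_w2=2.0` the diagonal K(x, x) is preserved layer to layer (since E[relu(u)²] = k/2 for u ~ N(0, k)), a common normalization choice in infinite-width analyses.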
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Extends theoretical understanding of neural network behavior, potentially informing future model architectures and training strategies.
RANK_REASON This is a research paper published on arXiv detailing theoretical advancements in neural network kernels and activation functions.