stochastic gradient descent
PulseAugur coverage of stochastic gradient descent — every cluster mentioning stochastic gradient descent across labs, papers, and developer communities, ranked by signal.
1 day with sentiment data
- New method tackles unbounded variance in variational inference
Researchers have developed a new approach to optimize Black-Box Variational Inference (BBVI) by addressing the inherent unbounded variance in its stochastic gradients. Their method, detailed in a new paper, focuses on t…
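For context, the variance problem arises in the vanilla score-function (REINFORCE) estimator on which BBVI is built. The sketch below implements that baseline estimator for a one-dimensional Gaussian variational family; the paper's variance-control scheme is not reproduced, and the target density, learning rate, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(z):
    # Unnormalized log-density of the model; here p(z) ~ N(3, 1).
    return -0.5 * (z - 3.0) ** 2

def bbvi_score_gradient(mu, log_sigma, n_samples=64):
    """Score-function (REINFORCE) gradient of the ELBO for q = N(mu, sigma^2).

    This is the vanilla BBVI estimator whose variance can blow up;
    the paper's variance-control method is not reproduced here.
    """
    sigma = np.exp(log_sigma)
    z = rng.normal(mu, sigma, size=n_samples)
    log_q = -0.5 * ((z - mu) / sigma) ** 2 - log_sigma - 0.5 * np.log(2 * np.pi)
    # Score functions: gradients of log q with respect to mu and log_sigma.
    score_mu = (z - mu) / sigma**2
    score_log_sigma = ((z - mu) / sigma) ** 2 - 1.0
    elbo_integrand = log_target(z) - log_q
    return (score_mu * elbo_integrand).mean(), (score_log_sigma * elbo_integrand).mean()

mu, log_sigma = 0.0, 0.0
for step in range(2000):
    g_mu, g_ls = bbvi_score_gradient(mu, log_sigma)
    mu += 0.01 * g_mu            # gradient *ascent* on the ELBO
    log_sigma += 0.01 * g_ls
print(mu, np.exp(log_sigma))     # should drift toward the target's mean 3 and std 1
```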
- New research derives advanced optimizers from evolutionary principles
Researchers have developed a new method to derive advanced optimization algorithms directly from evolutionary principles, unifying previously disparate views of evolution. This approach introduces Darwinian Lineage Simu…
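As generic background rather than the paper's Darwinian lineage construction, the sketch below shows the standard evolution-strategies recipe by which an optimizer-like update falls out of evolutionary sampling: perturb a parent, score offspring fitness, and recombine into a gradient-like step. The toy objective and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(theta):
    # Toy objective to maximize: negative quadratic, optimum at [1, -2].
    return -np.sum((theta - np.array([1.0, -2.0])) ** 2)

def es_step(theta, lr=0.05, sigma=0.1, pop=50):
    """One evolution-strategies update: perturb, evaluate, recombine.

    The smoothed-fitness estimate E[f(theta + sigma*eps) * eps] / sigma
    plays the role of a stochastic gradient.
    """
    eps = rng.normal(size=(pop, theta.size))
    f = np.array([fitness(theta + sigma * e) for e in eps])
    f = (f - f.mean()) / (f.std() + 1e-8)      # fitness shaping for stability
    grad = (f[:, None] * eps).mean(axis=0) / sigma
    return theta + lr * grad                    # ascent on fitness

theta = np.zeros(2)
for _ in range(300):
    theta = es_step(theta)
print(theta)  # approaches [1, -2]
```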
- Bayesian Parameter Shift Rule enhances VQE gradient estimation
Researchers have introduced a Bayesian variant of the parameter shift rule (PSR) for variational quantum eigensolvers (VQEs). This new method utilizes Gaussian processes to estimate objective function gradients, offerin…
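The Bayesian, Gaussian-process variant is the paper's contribution; what follows is only the standard two-point parameter shift rule it refines. For expectation values that are sinusoidal in each parameter, as with gates generated by Pauli operators, the rule is exact rather than a finite difference. The toy expectation function is an assumption.

```python
import numpy as np

def expectation(theta):
    # Toy stand-in for a VQE expectation value: sinusoidal in each
    # parameter, as <H>(theta) is for Pauli-rotation gates.
    return 0.7 * np.cos(theta[0]) + 0.3 * np.sin(theta[1])

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Gradient via the two-point parameter shift rule (exact here)."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += shift
        minus[i] -= shift
        grad[i] = (f(plus) - f(minus)) / 2.0
    return grad

theta = np.array([0.4, 1.1])
print(parameter_shift_grad(expectation, theta))
print(np.array([-0.7 * np.sin(theta[0]), 0.3 * np.cos(theta[1])]))  # analytic check
```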
- Researchers explore efficient parameter estimation for truncated Boolean product distributions
Researchers have developed a new method for estimating parameters of truncated Boolean product distributions, a problem previously unaddressed in discrete settings. The approach relies on a concept of 'fatness' for the …
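To make the setting concrete, the sketch below generates a Boolean product distribution, truncates it by rejection, and shows the bias that defeats naive estimation. It illustrates the problem, not the paper's 'fatness'-based estimator; all parameters and the truncation set are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
p_true = np.array([0.8, 0.5, 0.3])   # Bernoulli means of the product distribution

def sample_truncated(n, keep):
    """Rejection-sample a Boolean product distribution truncated to a set.

    `keep` decides membership in the truncation set; here: samples with
    at least two ones survive. Observing only survivors is exactly the
    setting that biases the naive estimator below.
    """
    out = []
    while len(out) < n:
        x = (rng.random(3) < p_true).astype(int)
        if keep(x):
            out.append(x)
    return np.array(out)

samples = sample_truncated(20000, keep=lambda x: x.sum() >= 2)
print("true means: ", p_true)
print("naive means:", samples.mean(axis=0))  # visibly biased upward by truncation
```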
- Researchers develop novel bootstrap for SGD confidence sets
Researchers have developed a novel method for constructing confidence sets in Stochastic Gradient Descent (SGD) algorithms. This new approach utilizes the multiplier bootstrap procedure and establishes its non-asymptoti…
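A minimal sketch of the multiplier-bootstrap idea on least-squares SGD, assuming mean-one exponential multiplier weights and percentile intervals; the paper's exact construction and its non-asymptotic guarantees are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic linear model y = X @ beta + noise.
n, d = 2000, 3
beta_true = np.array([1.0, -0.5, 2.0])
X = rng.normal(size=(n, d))
y = X @ beta_true + 0.5 * rng.normal(size=n)

def sgd_path(weights, lr=0.05):
    """Averaged SGD for least squares; `weights` are per-sample multipliers."""
    beta, avg = np.zeros(d), np.zeros(d)
    for t in range(n):
        i = rng.integers(n)
        grad = weights[i] * (X[i] @ beta - y[i]) * X[i]
        beta -= lr / np.sqrt(t + 1) * grad
        avg += (beta - avg) / (t + 1)            # Polyak-Ruppert averaging
    return avg

beta_hat = sgd_path(np.ones(n))

# Multiplier bootstrap: rerun SGD with random mean-one positive multipliers
# and read confidence sets off the spread of the perturbed solutions.
B = 200
boot = np.array([sgd_path(rng.exponential(1.0, size=n)) for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("estimate:", beta_hat)
print("95% CI per coordinate:", list(zip(lo, hi)))
```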
- Evolutionary game theory deciphers shortcut learning in deep neural networks
Researchers have developed a new theoretical framework using evolutionary game theory to understand shortcut learning in deep neural networks. The study formally defines core and shortcut features, modeling data samples…
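As a loose illustration of the evolutionary-game-theory lens rather than the paper's formal framework, the sketch below runs plain replicator dynamics over two hypothetical "strategies", reliance on a core feature versus a shortcut feature, with an assumed payoff matrix under which the shortcut dominates.

```python
import numpy as np

# Replicator dynamics over two feature-reliance strategies:
# index 0 = core feature, index 1 = shortcut feature. Payoffs are
# illustrative only; the paper's definitions are not reproduced.
payoff = np.array([[1.0, 0.8],
                   [1.2, 0.9]])   # shortcut earns more against either mix

x = np.array([0.5, 0.5])          # population shares of each strategy
dt = 0.01
for _ in range(5000):
    f = payoff @ x                # fitness of each strategy vs current mix
    x += dt * x * (f - x @ f)     # replicator equation: x_i' = x_i (f_i - f_bar)
    x /= x.sum()
print(x)  # the shortcut strategy takes over the population
```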
- New method bridges graph drawing and dimensionality reduction using stochastic optimization
Researchers have developed a new method that bridges graph drawing and dimensionality reduction techniques by adapting stochastic gradient descent for vector data embedding. This approach, implemented as a scikit-learn …
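A rough sketch of the shared mechanism: stochastic gradient steps on pairwise stress, which serves both metric dimensionality reduction (target distances from vector data, as here) and graph layout (target distances from shortest paths). This is not the paper's scikit-learn implementation; data sizes and schedules are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# High-dimensional input; target distances d_ij come from the data, as in
# metric MDS. For a graph, d_ij would be shortest-path distances instead.
X_high = rng.normal(size=(100, 10))
D = np.linalg.norm(X_high[:, None, :] - X_high[None, :, :], axis=-1)

pos = rng.normal(size=(100, 2))   # 2-D embedding to optimize
pairs = np.array([(i, j) for i in range(100) for j in range(i + 1, 100)])

for epoch in range(30):
    lr = 0.1 * 0.9**epoch
    rng.shuffle(pairs)            # visit pairs in random order each epoch
    for i, j in pairs:
        diff = pos[i] - pos[j]
        dist = np.linalg.norm(diff) + 1e-9
        # One stochastic step on the stress term (dist - D[i, j])^2:
        # move the pair toward its target distance.
        step = lr * (dist - D[i, j]) * diff / dist
        pos[i] -= 0.5 * step
        pos[j] += 0.5 * step

print(pos[:3])  # embedded coordinates
```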
- Researchers develop SGD algorithms for learning operators with operator-valued kernels
Researchers have developed a new method for estimating regression operators in statistical inverse problems. The approach utilizes regularized stochastic gradient descent (SGD) with operator-valued kernels, offering dim…
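A simplified sketch of regularized SGD in a vector-valued RKHS, assuming the simplest operator-valued kernel K(x, x') = k(x, x')·I with a scalar Gaussian k; genuinely matrix-valued kernels and the paper's inverse-problem analysis are beyond this illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def k(x, xp, gamma=2.0):
    # Scalar Gaussian kernel; K(x, x') = k(x, x') * I is the simplest
    # example of an operator-valued kernel for vector-valued outputs.
    return np.exp(-gamma * np.sum((x - xp) ** 2))

def target(x):
    # Vector-valued regression target R^2 -> R^2.
    return np.array([np.sin(x[0]), x[0] * x[1]])

centers, coefs = [], []          # iterate f_t = sum_s k(x_s, .) c_s
lam = 0.01                       # Tikhonov regularization weight

for t in range(1, 1001):
    x = rng.uniform(-1, 1, size=2)
    y = target(x) + 0.05 * rng.normal(size=2)
    f_x = sum(c * k(xs, x) for xs, c in zip(centers, coefs)) if centers else np.zeros(2)
    eta = 1.0 / (lam * t)        # classic step size for regularized SGD
    # Update f_{t+1} = (1 - eta*lam) f_t - eta (f_t(x) - y) k(x, .):
    # shrink old coefficients, then add a new center.
    coefs = [(1 - eta * lam) * c for c in coefs]
    centers.append(x)
    coefs.append(-eta * (f_x - y))

x_test = np.array([0.3, -0.5])
f_test = sum(c * k(xs, x_test) for xs, c in zip(centers, coefs))
print(f_test, target(x_test))    # rough agreement, up to noise and regularization
```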
- Researchers explore complex SGD and directional bias in kernel Hilbert spaces
Researchers have introduced a novel variant of Stochastic Gradient Descent (SGD) designed for complex-valued neural networks. This new method, termed complex SGD, offers convergence guarantees even without analyticity c…
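For flavor, a minimal complex-valued least-squares example using the Wirtinger (CR-calculus) descent direction, the standard device that lets SGD run without requiring the loss to be analytic in the weights; the paper's complex SGD variant and its convergence guarantees are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

# Complex linear model y = x^T w + noise, to be recovered by complex SGD.
d, n = 3, 4000
w_true = np.array([1 + 2j, -0.5j, 0.3 - 1j])
X = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
y = X @ w_true + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

w = np.zeros(d, dtype=complex)
lr = 0.01
for t in range(n):
    i = rng.integers(n)
    e = X[i] @ w - y[i]
    # Wirtinger calculus: for the real loss |e|^2, the descent direction is
    # the conjugate cogradient dL/d(conj(w)) = e * conj(x). No analyticity
    # of the loss in w is required, only real-differentiability.
    w -= lr * e * np.conj(X[i])
print(w)
print(w_true)
```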
- New research refines SGD generalization bounds and covariance estimation
Researchers have developed new methods to analyze the generalization capabilities of Stochastic Gradient Descent (SGD) in machine learning. One paper introduces predictable history-adaptive virtual perturbations, allowi…
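Neither paper's construction is reproduced below; as a generic illustration of covariance estimation for SGD, the sketch runs averaged SGD on linear regression and applies a batch-means estimate of the covariance of the averaged iterate. Step sizes and block counts are assumptions, and batch means glosses over the early nonstationary phase of the path.

```python
import numpy as np

rng = np.random.default_rng(7)

n, d = 50000, 2
beta_true = np.array([2.0, -1.0])
X = rng.normal(size=(n, d))
y = X @ beta_true + rng.normal(size=n)

beta = np.zeros(d)
iterates = np.empty((n, d))
for t in range(n):
    lr = 0.5 / (t + 1) ** 0.55                  # Robbins-Monro step size
    i = rng.integers(n)
    beta -= lr * (X[i] @ beta - y[i]) * X[i]
    iterates[t] = beta

beta_bar = iterates.mean(axis=0)                # Polyak-Ruppert average

# Batch-means covariance: split the iterate path into M blocks and use the
# spread of the block means, treating them as roughly independent.
M = 50
blocks = iterates.reshape(M, n // M, d).mean(axis=1)
cov_hat = (blocks - beta_bar).T @ (blocks - beta_bar) / (M * (M - 1))
print("estimate:", beta_bar)
print("stderr:  ", np.sqrt(np.diag(cov_hat)))
```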