Researchers have analyzed the high-dimensional risk of $\ell_2$-Boosting through the lens of its $\ell_1$ implicit bias, identifying a logarithmic rate of excess-variance decay under a pure-noise model. This failure of benign overfitting, where the variance decays only logarithmically rather than at a linear rate, is attributed to greedy selection localizing noise into sparse active sets. The study also found that for spiked-isotropic designs the risk converges to zero, but at a slower logarithmic rate than under $\ell_2$ geometries. To address this, a tuning-free early-stopping rule was proposed that recovers the Lasso basic inequality and achieves minimax-optimal empirical prediction rates for $\ell_1$-bounded signals (an illustrative sketch of the greedy procedure follows the card below).
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Provides theoretical insights into the behavior of boosting algorithms and their implications for signal-noise decomposition in high-dimensional settings.
RANK_REASON This is a theoretical computer science paper published on arXiv.
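To make the procedure concrete, here is a minimal Python/NumPy sketch of greedy componentwise $\ell_2$-Boosting run on a pure-noise design. The function name `l2_boosting`, the `stop_tol` threshold, and the synthetic data are illustrative assumptions; the stopping check is a placeholder and does not reproduce the paper's tuning-free rule or its analysis.

```python
import numpy as np

def l2_boosting(X, y, n_steps=200, step_size=0.1, stop_tol=None):
    """Greedy componentwise L2-Boosting sketch with a placeholder stop rule.

    `stop_tol` is a hypothetical threshold on the largest normalized
    correlation with the residual; it stands in for (but is not) the
    tuning-free early-stopping rule described in the summary.
    """
    n, p = X.shape
    beta = np.zeros(p)
    residual = y.astype(float).copy()
    col_norms_sq = np.sum(X ** 2, axis=0)      # ||X_j||^2 for each column
    for _ in range(n_steps):
        corr = X.T @ residual                  # X_j^T r for each column
        scores = corr ** 2 / col_norms_sq      # fit improvement per column
        j = int(np.argmax(scores))             # greedy coordinate selection
        if stop_tol is not None and np.abs(corr[j]) / n < stop_tol:
            break                              # stop once all correlations are small
        gamma = corr[j] / col_norms_sq[j]      # least-squares fit of residual on X_j
        beta[j] += step_size * gamma           # shrunken coordinate update
        residual -= step_size * gamma * X[:, j]
    return beta

# Illustrative use on synthetic pure-noise data (n < p)
rng = np.random.default_rng(0)
n, p = 100, 500
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)                     # pure noise: no signal to recover
beta_hat = l2_boosting(X, y, n_steps=50, stop_tol=0.05)
print("active set size:", np.count_nonzero(beta_hat))
```

The greedy coordinate choice is what the summary describes as localizing noise into sparse active sets: only coordinates selected before stopping ever become nonzero, so how long boosting runs on pure noise directly governs the excess variance.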