Researchers have characterized scale-invariant upper bounds for self-normalized martingales in linear regression, showing that such bounds are generally possible only in one dimension: for $d>1$, nontrivial scale-invariant bounds are impossible without additional assumptions. The work also resolves an open question on doubly-uniform regret in online linear regression, giving an algorithm with $O(\log T)$ regret for $d=1$ and proving that this guarantee is unattainable for $d>1$.
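The summary does not name the paper's algorithm. For orientation only, a classical forecaster that already attains $O(\log T)$ regret for square loss in one-dimensional online linear regression (without the scale-invariant, doubly-uniform guarantee discussed above) is the Vovk-Azoury-Warmuth ridge forecaster; a minimal sketch, where the function name `vaw_1d` and the regularization parameter `lam` are our illustrative choices:

```python
import numpy as np

def vaw_1d(xs, ys, lam=1.0):
    """Vovk-Azoury-Warmuth forecaster in one dimension.

    Predicts y_t from x_t with a ridge-regularized running estimate,
    including the *current* x_t in the normalizer before predicting
    (the VAW twist that yields O(log T) regret for square loss).
    """
    A = lam      # lam + sum of x_s^2 seen so far (current x_t included)
    b = 0.0      # sum of x_s * y_s over past rounds
    preds = []
    for x, y in zip(xs, ys):
        A += x * x               # add x_t^2 before predicting
        preds.append(b * x / A)  # predict with the ridge slope b / A
        b += x * y               # then observe y_t and update
    return preds

# Illustrative run on synthetic data with true slope 0.5.
rng = np.random.default_rng(0)
xs = rng.standard_normal(200)
ys = 0.5 * xs + 0.1 * rng.standard_normal(200)
preds = vaw_1d(xs, ys)
```

Note this sketch is scale-dependent: rescaling `ys` changes the losses relative to `lam`, which is exactly the kind of dependence the scale-invariant analysis above targets.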
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Advances theoretical understanding of regret bounds in online learning, potentially impacting future algorithm design.
RANK_REASON Academic paper detailing theoretical advancements in statistical learning.