Researchers have established finite-time last-iterate guarantees for stochastic gradient descent in co-coercive games under noisy feedback. This work extends previous findings by relaxing the assumption of vanishing noise to a more general model in which the noise magnitude can scale with the iterates. The paper establishes a last-iterate convergence rate of $O(\log(t)/t^{1/3})$ for such games, the first such guarantee under non-vanishing noise. Additionally, the study demonstrates convergence of the iterates to Nash equilibria and provides time-average convergence guarantees.
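The dynamics the summary describes can be sketched numerically. The snippet below is an illustrative toy example, not the paper's construction: the joint gradient field $F(x) = Ax$ with $A$ symmetric positive definite is co-coercive with its unique Nash equilibrium at the origin, the additive noise scales with the iterate (so it does not vanish at equilibrium), and the step-size schedule $\gamma_t \propto t^{-2/3}$, the noise scale, and the matrix $A$ are all assumptions chosen for the demo.

```python
import numpy as np

# Toy sketch (assumptions, not the paper's setup): last-iterate SGD in a
# co-coercive game. F(x) = A @ x with A symmetric positive definite is
# co-coercive, and the unique Nash equilibrium is x* = 0.
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5],
              [0.5, 2.0]])   # symmetric PD, so F(x) = A x is co-coercive

x = np.array([4.0, -3.0])    # initial joint strategy profile
sigma = 0.1                  # noise scale (illustrative choice)

for t in range(1, 5001):
    gamma = 0.1 / t ** (2 / 3)   # vanishing step size (assumed schedule)
    # Noise scales with the iterate and stays non-vanishing at equilibrium.
    noise = sigma * (1 + np.linalg.norm(x)) * rng.standard_normal(2)
    x = x - gamma * (A @ x + noise)   # noisy gradient step

print(np.linalg.norm(x))  # last iterate ends up near the equilibrium at 0
```

Despite the noise never vanishing, the decreasing step size drives the last iterate toward the equilibrium, which is the qualitative behavior the paper's $O(\log(t)/t^{1/3})$ bound quantifies.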
Summary written by gemini-2.5-flash-lite from 1 source.
Academic paper published on arXiv detailing theoretical advances in game theory and machine learning.