This paper introduces stronger forms of asymptotic normality for the maximum likelihood estimator (MLE). Under suitable assumptions on the score, it establishes sub-Gaussian tail bounds and convergence of all moments for the normalized estimation error. The paper also proves an entropic central limit theorem for a smoothed MLE, demonstrating convergence in relative entropy to a Gaussian law, and shows that the smoothing can be removed under additional conditions.
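For context, the classical result these modes of convergence strengthen is the asymptotic normality of the MLE. A standard textbook statement (background only, not quoted from the paper; the paper's precise regularity conditions may differ) is:

```latex
% Classical asymptotic normality of the MLE (standard background):
% for i.i.d. samples from p_{\theta_0} with Fisher information I(\theta_0),
\sqrt{n}\,\bigl(\hat{\theta}_n - \theta_0\bigr)
  \xrightarrow{d} \mathcal{N}\!\bigl(0,\; I(\theta_0)^{-1}\bigr).
% An entropic CLT strengthens this weak convergence to convergence
% in relative entropy (Kullback--Leibler divergence):
D_{\mathrm{KL}}\Bigl(\mathrm{Law}\bigl(\sqrt{n}(\hat{\theta}_n - \theta_0)\bigr)
  \,\Big\|\, \mathcal{N}\bigl(0, I(\theta_0)^{-1}\bigr)\Bigr) \longrightarrow 0.
```

By Pinsker's inequality, convergence in relative entropy implies convergence in total variation, so an entropic CLT is strictly stronger than the classical convergence in distribution.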
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT This research strengthens the theoretical foundations of statistical estimation: sub-Gaussian tail bounds and entropic limit theorems give sharper guarantees for maximum likelihood estimation, a technique underlying many AI models, and could support more robust uncertainty quantification for such models.
RANK_REASON The cluster contains an academic paper published on arXiv detailing statistical research.