This paper presents a comprehensive comparison of full-graph and mini-batch training for Graph Neural Networks (GNNs). It examines how batch size and fan-out size affect GNN convergence and generalization, offering both theoretical and empirical insights. The work introduces a novel generalization analysis based on Wasserstein distance and highlights the non-isotropic effects of these two hyperparameters, suggesting that full-graph training is not always superior to well-tuned mini-batch training.
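For context, the two hyperparameters the paper studies are exactly the knobs exposed by neighbor-sampling mini-batch loaders. The sketch below (an illustration, not the paper's code; it assumes PyTorch Geometric's `NeighborLoader` and a synthetic graph, with arbitrary values for `batch_size` and `fan_out`) shows where they enter a standard mini-batch GNN training loop.

```python
# Minimal sketch of mini-batch GNN training, assuming PyTorch Geometric.
# batch_size and fan_out are the hyperparameters discussed in the summary;
# the graph, model sizes, and values here are illustrative only.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.loader import NeighborLoader
from torch_geometric.nn import SAGEConv

# Toy random graph: 1000 nodes, 16 features, 5000 directed edges, 4 classes.
num_nodes, num_feats, num_classes = 1000, 16, 4
data = Data(
    x=torch.randn(num_nodes, num_feats),
    edge_index=torch.randint(0, num_nodes, (2, 5000)),
    y=torch.randint(0, num_classes, (num_nodes,)),
    train_mask=torch.rand(num_nodes) < 0.8,
)

batch_size = 128   # seed nodes per mini-batch
fan_out = [10, 5]  # neighbors sampled per layer (2-layer GNN)

loader = NeighborLoader(
    data,
    num_neighbors=fan_out,
    batch_size=batch_size,
    input_nodes=data.train_mask,
    shuffle=True,
)

class SAGE(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = SAGEConv(num_feats, 32)
        self.conv2 = SAGEConv(32, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = SAGE()
opt = torch.optim.Adam(model.parameters(), lr=0.01)

for batch in loader:
    opt.zero_grad()
    out = model(batch.x, batch.edge_index)
    # Only the seed (target) nodes of each mini-batch contribute to the loss.
    loss = F.cross_entropy(out[:batch.batch_size], batch.y[:batch.batch_size])
    loss.backward()
    opt.step()
```

Full-graph training corresponds to dropping the loader and running the model once per epoch on the entire `data` object; the paper's point is that the mini-batch configuration above, when tuned, need not be inferior to that baseline.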
IMPACT Provides theoretical and empirical guidance for tuning GNN training hyperparameters, potentially improving efficiency and performance.
RANK_REASON Academic paper analyzing GNN training methodologies.