A new paper explores the expressiveness of different Recurrent Graph Neural Network (RGNN) models, specifically converging, output-converging, and halting RGNNs. The research establishes that on undirected graphs, converging RGNNs are as expressive as graded-bisimulation-invariant halting RGNNs, while output-converging RGNNs are at least as expressive as halting RGNNs. The study introduces a "traffic-light" protocol to address the desynchronization challenge that arises when simulating halting RGNNs with converging ones, answering an open question in the field.
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Clarifies theoretical expressiveness limits of RGNN variants, potentially guiding future research in graph-based AI.
RANK_REASON Academic paper analyzing theoretical properties of RGNN models.