Researchers have developed and compared three explainable AI (XAI) methods for the graph neural networks used in jet tagging at the Large Hadron Collider: GNNExplainer, GNNShap, and GradCAM. The study adapted these XAI techniques to the Lund plane representation, which maps parton splittings to graph nodes. By introducing a physics-informed evaluation framework, the research quantifies how explanation quality varies across energy regimes and assesses how well AI-assigned importance correlates with established jet substructure observables.
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Provides methods to interpret complex AI models in high-energy physics, potentially improving understanding of learned features.
RANK_REASON Academic paper presenting a comparative study of explainability methods for graph neural networks in a specific physics application.
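All three methods named above assign importance scores to individual graph nodes. As a hedged illustration of the general idea behind one of them, a Grad-CAM-style node importance can be sketched in NumPy: channel weights come from globally averaged gradients, and each node's score is the ReLU of the weighted sum of its activations. This is not the paper's implementation; all array names, shapes, and values here are illustrative assumptions.

```python
import numpy as np

def gradcam_node_importance(activations, gradients):
    """Grad-CAM-style importance for graph nodes (illustrative sketch).

    activations: (num_nodes, num_channels) hidden features from a GNN layer.
    gradients:   (num_nodes, num_channels) gradient of the class score
                 with respect to those activations.
    Returns a (num_nodes,) array of non-negative importance scores.
    """
    # Channel weights: gradients averaged over all nodes (global average pooling).
    alpha = gradients.mean(axis=0)                 # shape (num_channels,)
    # Weighted sum of activations per node; ReLU keeps only positive evidence.
    scores = np.maximum(activations @ alpha, 0.0)  # shape (num_nodes,)
    return scores

# Toy example: 4 nodes, 3 channels of made-up activations and gradients.
acts = np.array([[1.0, 0.5, 0.0],
                 [0.2, 0.1, 0.3],
                 [0.0, 0.0, 1.0],
                 [0.4, 0.9, 0.2]])
grads = np.array([[ 0.3, -0.1, 0.2],
                  [ 0.1,  0.0, 0.1],
                  [-0.2,  0.4, 0.0],
                  [ 0.2,  0.1, 0.3]])
print(gradcam_node_importance(acts, grads))
```

In a jet-tagging setting, each node would be a Lund plane splitting, so high-scoring nodes could be compared against substructure observables, which is the kind of correlation the paper's evaluation framework quantifies.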