GradCAM
PulseAugur coverage of GradCAM — every cluster mentioning GradCAM across labs, papers, and developer communities, ranked by signal.
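For context on the technique these clusters track: Grad-CAM weights a convolutional layer's feature maps by the spatially averaged gradients of the target class score with respect to those maps, sums, and applies a ReLU. A minimal NumPy sketch of that arithmetic (in practice the activation and gradient tensors come from a deep-learning framework's forward/backward hooks; the function name and array shapes here are illustrative):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer.

    activations: feature maps A_k, shape (K, H, W)
    gradients:   d(class score)/dA_k, same shape
    Returns a non-negative (H, W) heatmap (before any
    upsampling to the input resolution).
    """
    # alpha_k: global-average-pool each gradient map over space
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # weighted sum of feature maps, contracting over K
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    # ReLU: keep only features with positive influence on the class
    return np.maximum(cam, 0.0)
```

The final ReLU is what restricts the map to regions that push the class score up; channels whose gradients are negative on average subtract from the sum and are clipped away.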
-
GRALIS framework unifies linear attribution methods for deep neural networks
Researchers have introduced GRALIS, a mathematical framework that unifies linear attribution methods in Explainable AI (XAI). The framework establishes a canonical representation for attribution…
-
AI models use inpainting to identify individual animals by skin patterns
Researchers have explored deep learning methods for identifying individual animals based on their skin patterns, a task crucial for biodiversity monitoring. The study focuses on enhancing machine learning models' respon…
-
Deep learning models show promise in predicting cryptocurrency regimes from chart data
Researchers have conducted a systematic study on using deep learning for cryptocurrency regime prediction based on visual chart representations. They compared various image encoding methods, chart components, and neural…
-
InfiltrNet combines CNN and Transformer for brain tumor infiltration risk prediction
Researchers have developed InfiltrNet, a novel dual-branch architecture designed to predict brain tumor infiltration risk. This system combines a CNN encoder with a Swin Transformer encoder, utilizing cross-attention fu…
-
AI researchers compare explainability methods for jet tagging in particle physics
Researchers have developed and compared three explainable AI (XAI) methods (GNNExplainer, GNNShap, and GradCAM) to understand the predictions of graph neural networks used in jet tagging at the Large Hadron Collider. The …
-
Towards interpretable AI with quantum annealing feature selection
Researchers have developed a novel method for interpreting Convolutional Neural Networks (CNNs) in image classification tasks by leveraging quantum annealing for feature selection. This approach identifies the most infl…