Shap
PulseAugur coverage of Shap — every cluster mentioning SHAP (SHapley Additive exPlanations) across labs, papers, and developer communities, ranked by signal.
-
New algorithm computes exact Shapley values for product-kernel methods
Researchers have developed PKeX-Shapley, a novel algorithm designed to compute exact Shapley values for product-kernel methods in machine learning. This new method leverages the multiplicative structure of product kerne…
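For orientation (this is the standard definition, not a description of the PKeX-Shapley algorithm), the quantity being computed exactly is the Shapley value of feature $i$ under a coalition payoff $v$ over the feature set $N$:

$$
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)
$$

The sum ranges over all $2^{|N|-1}$ coalitions, which is why exact computation is normally intractable; the multiplicative structure of a product kernel, $k(x, x') = \prod_{i \in N} k_i(x_i, x'_i)$, is what the summary credits with making the exact computation feasible here.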
-
CatNet paper introduces SHAP for feature importance in LSTM FDR control
Researchers have introduced CatNet, a novel algorithm designed to control the False Discovery Rate (FDR) and identify significant features within Long Short-Term Memory (LSTM) networks. This method utilizes the derivati…
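Whatever the specifics of CatNet's statistic, the raw ingredient is a per-feature SHAP attribution for a recurrent model. A minimal, model-agnostic sketch, assuming the `shap` Python library and a toy Keras LSTM; none of this is the CatNet procedure:

```python
# Per-feature SHAP attributions for a small LSTM classifier, obtained
# model-agnostically by wrapping predict() for shap.KernelExplainer.
# Shapes, layer sizes, and the toy target are illustrative assumptions.
import numpy as np
import shap
import tensorflow as tf

T, D = 20, 8                                         # timesteps, features (assumed)
X = np.random.randn(500, T, D).astype("float32")
y = (X[:, -1, 0] > 0).astype("float32")              # toy label

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, D)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=2, verbose=0)

# KernelExplainer works on any function of a flat feature row, so reshape
# flat rows back into (timesteps, features) inside the wrapper.
predict_flat = lambda x2d: model.predict(x2d.reshape(-1, T, D), verbose=0).ravel()
background = shap.kmeans(X[:100].reshape(100, -1), 10)
explainer = shap.KernelExplainer(predict_flat, background)
phi = explainer.shap_values(X[:5].reshape(5, -1), nsamples=200)   # (5, T*D) attributions
```

An FDR-controlling step such as the one CatNet targets would then turn these attributions, or statistics derived from them, into selected and rejected features; that part is not sketched here.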
-
Neural-Actuarial Longevity Forecasting: Anchoring LSTMs for Explainable Risk Management
Researchers have developed a new neural-actuarial framework called Hybrid-Lift to improve longevity forecasting. This approach combines Hierarchical LSTM networks with a Mean-Bias Correction anchoring mechanism to addre…
-
ML models show inconsistent feature importance in electrospinning research
A new research paper explores the consistency of feature importance across various machine learning models in the context of electrospinning. The study evaluated 21 different ML models using SHAP values to assess the re…
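As a concrete (and much smaller) version of the kind of consistency check described, one can compare SHAP-derived feature rankings from two models via rank correlation. A sketch assuming the `shap` library and scikit-learn, not the paper's 21-model protocol:

```python
# Compare global SHAP importance rankings from two tree ensembles fitted to
# the same tabular data; a low rank correlation signals the inconsistency
# the study reports. Data and models here are illustrative stand-ins.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

X, y = make_regression(n_samples=400, n_features=10, noise=0.1, random_state=0)

def mean_abs_shap(model):
    model.fit(X, y)
    sv = shap.TreeExplainer(model).shap_values(X)   # (n_samples, n_features)
    return np.abs(sv).mean(axis=0)                  # global importance per feature

imp_rf = mean_abs_shap(RandomForestRegressor(n_estimators=200, random_state=0))
imp_gb = mean_abs_shap(GradientBoostingRegressor(random_state=0))

rho, _ = spearmanr(imp_rf, imp_gb)
print(f"rank agreement between the two models (Spearman rho): {rho:.2f}")
```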
-
GRALIS framework unifies linear attribution methods for deep neural networks
Researchers have introduced GRALIS, a novel mathematical framework designed to unify various linear attribution methods used in Explainable AI (XAI). This framework establishes a canonical representation for attribution…
-
AI decodes driver behavior and auditory signals using advanced machine learning
Researchers have developed a new framework for classifying driver behavior using a combination of physiological signals like EEG, EMG, and GSR. The system employs SHAP-based feature selection to identify the most predic…
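SHAP-based feature selection in this spirit usually means ranking features by mean absolute SHAP value and retraining on the top-ranked subset. A minimal sketch on synthetic data; the feature set is a stand-in for EEG/EMG/GSR descriptors, and this is not the authors' pipeline:

```python
# Rank features by mean |SHAP| from a fitted classifier, keep the top-k,
# and retrain on the reduced set. Data and the choice of k are illustrative.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=30, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
sv = np.asarray(shap.TreeExplainer(clf).shap_values(X_tr))

# Depending on the shap version, binary classifiers may add a class axis;
# average |SHAP| over every axis except the feature axis.
feature_axis = [ax for ax, size in enumerate(sv.shape) if size == X_tr.shape[1]][0]
importance = np.abs(sv).mean(axis=tuple(ax for ax in range(sv.ndim) if ax != feature_axis))

top_k = np.argsort(importance)[::-1][:10]            # keep the 10 most predictive features
clf_small = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, top_k], y_tr)
print("accuracy on the selected features:", clf_small.score(X_te[:, top_k], y_te))
```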
-
New neural network architectures offer aligned explanations for AI predictions
Researchers have introduced Pointwise-interpretable Networks (PiNets), a novel architecture designed to ensure that explanations for neural network predictions genuinely reflect the model's reasoning process. These netw…
-
New phi-table method enhances global SHAP explanations for tabular models
Researchers have introduced the $\phi$-table, a new method for statistically explaining global SHAP values in tabular black-box regression models. This approach moves beyond simple feature importance rankings to provide…
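The $\phi$-table construction itself isn't visible in the excerpt; for reference, the plain global-SHAP baseline it claims to move beyond is typically the mean absolute SHAP value per feature over a sample, as sketched below (assuming the `shap` library, not the paper's method):

```python
# Baseline "global SHAP" for a tabular regressor: mean |SHAP| per feature
# plus the standard beeswarm summary plot. A richer statistical summary,
# as the phi-table aims for, would build on these same per-sample values.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

data = load_diabetes()
X, y = data.data, data.target
model = GradientBoostingRegressor(random_state=0).fit(X, y)

sv = shap.TreeExplainer(model).shap_values(X)                      # (n_samples, n_features)
global_importance = dict(zip(data.feature_names, np.abs(sv).mean(axis=0)))
print(sorted(global_importance.items(), key=lambda kv: -kv[1]))

shap.summary_plot(sv, X, feature_names=data.feature_names)         # beeswarm overview
```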
-
AI research in 2026: Spiking networks, web search agents, and Anthropic's dangerous Claude Mythos
New research indicates that Binary Spiking Neural Networks can serve as reliable causal models, outperforming existing methods like SHAP in explaining AI decisions. Separately, a novel bi-level multi-agent system called…
-
GRASP framework enhances medical prediction with robust feature selection
Researchers have developed GRASP, a new framework for feature selection in medical prediction tasks. GRASP combines Shapley value attributions with group $L_{21}$ regularization to identify compact and interpretable fea…
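The group penalty referenced here is standard even though GRASP's exact objective isn't shown: for a coefficient matrix $W$ whose rows index feature groups, the $L_{21}$ (i.e. $\ell_{2,1}$) norm is

$$
\|W\|_{2,1} = \sum_{g} \Bigl( \sum_{j} W_{gj}^{2} \Bigr)^{1/2},
$$

and adding $\lambda \|W\|_{2,1}$ to a training loss pushes entire rows (groups) to zero together, which is what yields compact, group-level feature sets; per the summary, GRASP pairs this penalty with Shapley attributions to decide which groups to keep.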
-
Researchers use causal analysis to explain Binary Spiking Neural Networks
Researchers have developed a novel causal analysis framework for Binary Spiking Neural Networks (BSNNs), treating their spiking activity as a binary causal model. This approach allows for logic-based explanations of net…
-
AI framework cuts brain microstructure scan time by half
Researchers have developed a new, faster protocol for quantifying human gray matter microstructure using diffusion MRI. By employing an Explainable AI (XAI) framework, specifically XGBoost and SHAP, they identified an o…
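The XGBoost-plus-SHAP pattern behind this kind of protocol pruning is routine, even though the acquisition details are specific to the paper. A sketch on synthetic data, where the feature meanings and the model are illustrative assumptions rather than the authors' setup:

```python
# Fit an XGBoost regressor, compute SHAP values with TreeExplainer, and rank
# input measurements by mean |SHAP|; consistently uninformative measurements
# are the candidates for dropping from an acquisition protocol.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 6))                 # stand-ins for acquisition parameters (assumed)
y = 2.0 * X[:, 0] + X[:, 2] ** 2 + rng.normal(scale=0.1, size=800)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4).fit(X, y)
sv = shap.TreeExplainer(model).shap_values(X)

ranking = np.argsort(np.abs(sv).mean(axis=0))[::-1]
print("measurements ranked from most to least influential:", ranking)
```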
-
Explainable Load Forecasting with Covariate-Informed Time Series Foundation Models
Researchers have developed a method to make Time Series Foundation Models (TSFMs) more transparent for critical infrastructure applications like power grids. Their approach uses Shapley Additive Explanations (SHAP) to e…
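Attribution over a black-box forecaster is usually done model-agnostically, which is all a TSFM requires from the outside. A minimal sketch with a toy stand-in for the model; the wrapper function, covariate layout, and sample sizes are assumptions, not the paper's setup:

```python
# Model-agnostic SHAP over a black-box forecast function: KernelExplainer only
# needs a callable from covariate rows to predictions, so a real TSFM would be
# wrapped the same way. The "forecast" below is a toy placeholder.
import numpy as np
import shap

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))                # flattened covariate windows (assumed layout)

def forecast(x2d):
    # placeholder for "predict next-period load from a covariate window"
    return 1.5 * x2d[:, 0] + np.tanh(x2d[:, 3]) + 0.2 * x2d[:, 7]

background = shap.kmeans(X, 10)               # compact reference distribution
explainer = shap.KernelExplainer(forecast, background)
phi = explainer.shap_values(X[:5], nsamples=500)   # per-covariate contribution to each forecast
print(phi.shape)
```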
-
Machine learning corrects indentation size effect in steels with small datasets
Researchers have developed a data-efficient method for correcting the indentation size effect (ISE) in steels using machine learning and physics-guided augmentation. By augmenting a dataset of approximately 700 experime…
-
AI frameworks improve knee osteoarthritis grading with new learning and explainability methods
Two new research papers propose advanced AI methods for grading knee osteoarthritis from X-ray images. One paper, H-SemiS, utilizes a hierarchical fusion of semi-supervised and self-supervised learning to address class …
-
Interpretable AI framework enhances U.S. grid load forecasting under extreme weather
Researchers have developed a new interpretable deep learning framework for electricity load forecasting, designed to enhance U.S. grid resilience during extreme weather events. The system combines Convolutional Neural N…
-
Researchers use SHAP and RL to improve robot generalization and affordance reasoning
Researchers have developed a framework using SHapley Additive exPlanations (SHAP) to analyze and improve the generalizability of reinforcement learning (RL) algorithms in robotics. This approach quantifies the impact of…
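The generic pattern of attributing a trained policy's output to its state features is straightforward to sketch, even though the paper's framework goes further. Assuming the `shap` library and a toy stand-in policy; nothing here is the authors' code:

```python
# Attribute a policy's action-preference score to individual state features
# with KernelExplainer; the spread of attributions across states gives a
# rough picture of which features the policy leans on.
import numpy as np
import shap

rng = np.random.default_rng(2)
states = rng.uniform(-1.0, 1.0, size=(400, 6))     # e.g. joint angles, object pose (assumed)

def policy_score(s):
    # placeholder for "score of the chosen action in state s"
    return 2.0 * s[:, 1] - s[:, 4] ** 2 + 0.5 * s[:, 0] * s[:, 1]

explainer = shap.KernelExplainer(policy_score, shap.kmeans(states, 15))
phi = explainer.shap_values(states[:10], nsamples=300)

# Mean |attribution| per state feature; features that dominate here but vary
# sharply across environments are natural suspects for poor generalization.
print(np.abs(phi).mean(axis=0))
```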
-
AI research reviews explainable AI techniques for food industry applications
A new review paper categorizes explainable AI (XAI) techniques for use in Food Engineering, aiming to increase transparency and reliability in AI models. The paper highlights the underutilization of XAI in this field, d…
-
Agentic AI platforms autonomously train models and induce rules for protein interactions
Researchers have developed agentic AI platforms capable of autonomously training predictive machine learning models and inducing explicit rules for protein-protein interactions (PPIs). One platform focuses on data colle…
-
New framework enhances AI explainability for spectral data analysis
Researchers have developed the Spectral Model eXplainer (SMX), a new framework designed to improve the explainability of machine learning models used in chemometrics and spectroscopy. Unlike existing methods that focus …