A new paper examines the evaluation of explainable AI (XAI) methods, specifically Shapley value variants, in high-stakes scenarios like fraud detection. Researchers found that standard quantitative metrics for XAI do not align with human understanding or decision utility. While the tested XAI formulations did not improve analyst performance, they did increase decision confidence, raising concerns about automation bias.
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Current XAI evaluation metrics may not reflect real-world human utility, potentially leading to overconfidence and automation bias in critical decision-making.
RANK_REASON Academic paper on XAI evaluation methods.