PulseAugur

Measuring the Sensitivity of Classification Models with the Error Sensitivity Profile

Researchers have introduced the Error Sensitivity Profile (ESP), a metric that quantifies how sensitive a classification model's performance is to errors in its training data. The metric can help prioritize data-cleaning efforts by identifying the features whose errors most affect model accuracy. An accompanying tool, \dirty, computes the ESP; experiments show that performance degradation is not always predictable from simple correlations.


IMPACT Provides a new method for identifying critical data errors, potentially improving model robustness and reducing data cleaning costs.

RANK_REASON This is a research paper introducing a new metric and tool for analyzing classification models.


COVERAGE [2]

  1. arXiv cs.LG TIER_1 · Andrea Maurino

    Measuring the Sensitivity of Classification Models with the Error Sensitivity Profile

    arXiv:2604.25765v1 · Abstract: The quality of training data is critical to the performance of machine learning models. In this paper, the Error Sensitivity Profile (ESP) is proposed. It quantifies the sensitivity of model performance to errors in a single feature…

  2. arXiv cs.AI TIER_1 · Andrea Maurino

    Measuring the Sensitivity of Classification Models with the Error Sensitivity Profile

    The quality of training data is critical to the performance of machine learning models. In this paper, the Error Sensitivity Profile (ESP) is proposed. It quantifies the sensitivity of model performance to errors in a single feature or in multiple features. By leveraging ESP, dat…