This paper introduces a novel framework for addressing fairness in machine learning models, particularly for continuous protected attributes such as age. The proposed method formalizes fairness criteria using path-specific partial derivatives, extending existing causal formulations. It also presents a tuning algorithm designed to construct fair predictors, or to manage trade-offs between different fairness metrics when perfect fairness is unattainable.
IMPACT Introduces a new theoretical framework and tuning algorithm for achieving causal fairness in machine learning models, particularly for continuous protected attributes.
RANK_REASON This is a research paper published on arXiv detailing a new theoretical framework and algorithm for causal fairness in machine learning.
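The core idea of a derivative-based fairness criterion can be illustrated with a toy sketch. This is not the paper's algorithm: the predictor, its weights, and the penalty form below are all hypothetical, and the partial derivative with respect to the protected attribute is estimated by finite differences rather than any path-specific causal decomposition.

```python
# Hypothetical illustration only (not the paper's method): penalize the
# sensitivity of a predictor f(a, x) to a continuous protected attribute `a`
# via the partial derivative df/da.

def predictor(a, x, w_a=0.5, w_x=1.2, b=0.1):
    """Toy linear predictor; the weights are illustrative, not learned."""
    return w_a * a + w_x * x + b

def sensitivity(f, a, x, eps=1e-5):
    """Central finite-difference estimate of the partial derivative df/da."""
    return (f(a + eps, x) - f(a - eps, x)) / (2 * eps)

def fairness_penalty(f, samples):
    """Mean squared df/da over a batch; zero when f locally ignores `a`."""
    return sum(sensitivity(f, a, x) ** 2 for a, x in samples) / len(samples)

# (age, feature) pairs; for the linear toy model the penalty equals w_a**2.
samples = [(25.0, 1.0), (40.0, -0.3), (63.0, 0.7)]
print(round(fairness_penalty(predictor, samples), 6))  # → 0.25
```

Driving such a penalty to zero during training would force the predictor to ignore `a` along every path; the paper's path-specific formulation is finer-grained, restricting which causal paths the derivative is taken along.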