PulseAugur

New framework tackles causal fairness for continuous attributes in AI

This paper introduces a framework for addressing fairness in machine learning models with continuous protected attributes such as age. The proposed method formalizes fairness criteria using path-specific partial derivatives, extending existing causal formulations to the continuous setting. It also presents a tuning algorithm for constructing fair predictors, or for managing trade-offs between fairness criteria when perfect fairness is unattainable.
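
The criterion lends itself to a gradient-penalty reading: for a continuous attribute such as age, ask that the partial derivative of the prediction with respect to the attribute vanish along the paths deemed unfair, and trade that requirement off against accuracy. The sketch below illustrates only this general derivative-penalty idea in PyTorch, penalizing the total derivative of the prediction with respect to age during training; it is not the paper's path-specific algorithm, and the data and names in it are hypothetical.

    # Minimal sketch (assumed setup, not the paper's algorithm): train a
    # predictor while penalizing d(prediction)/d(age), where age is a
    # continuous protected attribute. `lam` stands in for the
    # fairness/accuracy knob that a tuning algorithm would adjust.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Hypothetical synthetic data: one feature plus a continuous age attribute.
    n = 512
    age = torch.rand(n, 1) * 60 + 18                # ages 18..78
    x = torch.randn(n, 1)
    y = 2.0 * x.squeeze() + 0.05 * age.squeeze() + 0.1 * torch.randn(n)

    model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    lam = 1.0                                       # fairness vs. accuracy trade-off

    for step in range(500):
        a = age.clone().requires_grad_(True)        # track gradients w.r.t. age
        pred = model(torch.cat([x, a], dim=1)).squeeze()
        # d pred_i / d age_i via autograd; create_graph keeps the penalty trainable
        dpred_da = torch.autograd.grad(pred.sum(), a, create_graph=True)[0]
        loss = ((pred - y) ** 2).mean() + lam * (dpred_da ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

A large lam pushes the fitted predictor toward age-insensitivity at the cost of fit; a path-specific version would restrict the penalty to derivatives transmitted along the unfair causal paths only.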

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Introduces a new theoretical framework and algorithm for achieving causal fairness in machine learning models, particularly for continuous protected attributes.

RANK_REASON This is a research paper published on arXiv detailing a new theoretical framework and algorithm for causal fairness in machine learning.

Read on arXiv stat.ML →

COVERAGE [2]

  1. arXiv cs.LG TIER_1 · Filip Edström, Guilherme W. F. Barros, Tetiana Gorbach, Xavier de Luna

    Tuning Derivatives for Causal Fairness in Machine Learning

    arXiv:2605.05882v1 (Announce Type: cross) · Abstract: Artificial-intelligence systems are becoming ubiquitous in society, yet their predictions typically inherit biases with respect to protected attributes such as race, gender, or age. Classical fairness notions, most notably Statist…

  2. arXiv stat.ML TIER_1 · Xavier de Luna

    Tuning Derivatives for Causal Fairness in Machine Learning

    Artificial-intelligence systems are becoming ubiquitous in society, yet their predictions typically inherit biases with respect to protected attributes such as race, gender, or age. Classical fairness notions, most notably Statistical Parity (SP), demand that predictions be indep…