This paper introduces Interpretability-Guided Bi-objective Optimization (IGBO), a framework for training models that are both accurate and interpretable. IGBO integrates structured domain knowledge through a bi-objective formulation and encodes feature-importance hierarchies in a Directed Acyclic Graph (DAG). The framework uses Temporal Integrated Gradients (TIG) to measure feature importance and proposes a novel Relative Importance Score to quantify feature attribution over time.
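The paper's Temporal Integrated Gradients (TIG) builds on standard Integrated Gradients, which attributes a model's output to input features by accumulating gradients along a path from a baseline to the input. A minimal sketch of that underlying idea, assuming a toy model `f`, a zero baseline, and finite-difference gradients (none of which are from the paper):

```python
# Illustrative Integrated Gradients sketch; the model `f`, the zero
# baseline, and the step count are assumptions, not the paper's method.
import numpy as np

def integrated_gradients(f, x, baseline, steps=64):
    """Approximate IG_i = (x_i - b_i) * integral_0^1 df(b + a*(x-b))/dx_i da."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint Riemann sum
    total = np.zeros_like(x, dtype=float)
    eps = 1e-6
    for a in alphas:
        point = baseline + a * (x - baseline)
        # central finite differences stand in for autodiff gradients
        grad = np.array([
            (f(point + eps * e) - f(point - eps * e)) / (2 * eps)
            for e in np.eye(len(x))
        ])
        total += grad
    return (x - baseline) * total / steps

# Toy model: attributions should sum to f(x) - f(baseline) (completeness)
f = lambda z: z[0] ** 2 + 3.0 * z[1]
x = np.array([2.0, 1.0])
b = np.zeros(2)
attr = integrated_gradients(f, x, b)
print(attr, attr.sum(), f(x) - f(b))
```

The completeness property checked at the end (attributions summing to the output difference) is what makes this family of methods attractive for quantitative importance scores such as the one IGBO proposes.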
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Introduces a novel framework for enhancing model interpretability, potentially aiding in the development of more trustworthy AI systems.