This paper introduces a formal framework to explain why individuals or AI systems can reach different conclusions from the same set of observations. It proposes two levels of non-identifiability: divergence in conclusions arising from differing inference settings, and divergence in the learned world models themselves. The authors define an 'inference profile' to model these differences, connect the framework to concepts in deep representation learning, and use AI regulation debates as a case study.
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Provides a theoretical lens to understand and potentially mitigate disagreements in AI decision-making and human-AI interaction.
RANK_REASON Academic paper published on arXiv detailing a new theoretical framework for understanding inference divergence.