A new paper critiques the concept of "ground truth" in data annotation for machine learning, arguing that human disagreement is often treated as noise rather than a valuable signal. The research highlights how factors like positional legibility, reliance on model-mediated annotations, and geographic hegemony contribute to a "consensus trap." The authors propose a shift from seeking a single correct answer to mapping the diversity of human experience for more culturally competent AI models.
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Challenges the notion of "ground truth" in AI training data, potentially impacting how future models are evaluated and developed for cultural competence.
RANK_REASON The cluster contains an academic paper published on arXiv.