Researchers have developed a new framework to model individual perspectives by analyzing annotator-specific explanations alongside predictions. This approach uses a 'User Passport' mechanism to incorporate annotator identity and demographic data. Two explainer architectures were tested: a post-hoc prompt-based explainer and a prefixed bridge explainer, both designed to generate explanations aligned with individual annotator viewpoints. The study found that modeling explanations significantly improved predictive performance and offered richer representations of disagreement.
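The 'User Passport' idea can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the function names, the hashing trick, and the demographic fields are all assumptions made here to show how an annotator-specific vector could condition a shared feature representation.

```python
# Toy sketch (assumed, not the paper's actual method) of a "User Passport":
# a per-annotator vector derived from identity and demographic fields,
# concatenated with shared text features before prediction.
import hashlib

def passport_vector(annotator_id: str, demographics: dict, dim: int = 8) -> list:
    """Hash identity + demographic fields into a fixed-size pseudo-embedding."""
    payload = annotator_id + "|" + "|".join(
        f"{k}={v}" for k, v in sorted(demographics.items())
    )
    digest = hashlib.sha256(payload.encode()).digest()
    # Map each byte to roughly [-1, 1] so the vector behaves like a small embedding.
    return [b / 127.5 - 1.0 for b in digest[:dim]]

def personalized_features(text_features: list, passport: list) -> list:
    """Concatenate shared text features with the annotator's passport vector."""
    return text_features + passport

vec = passport_vector("ann_42", {"age_band": "25-34", "region": "EU"})
feats = personalized_features([0.1, 0.9], vec)
print(len(vec), len(feats))  # 8 10
```

In a real model the passport would be a learned embedding fed to the predictor and to the explainer, so that generated explanations reflect the individual annotator's viewpoint.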
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Introduces a novel method for improving model interpretability and predictive performance by incorporating annotator-specific rationales.
RANK_REASON This is a research paper published on arXiv detailing a new framework for modeling explanations in natural language inference tasks.