A new paper investigates annotation quality for Aspect-Based Sentiment Analysis (ABSA) in German, comparing experts, students, crowdworkers, and large language models (LLMs) as annotation sources. The study re-annotated an existing dataset to establish a ground truth and measured annotation quality via Inter-Annotator Agreement (IAA). It also assessed how these different annotation sources affect downstream model performance on ABSA subtasks, using BERT, T5, and LLaMA-based models.
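Since IAA is central to the study, here is a minimal sketch of how pairwise agreement between annotator groups could be computed, assuming Cohen's kappa as the metric; the annotator groups and label data below are hypothetical, and the paper's exact IAA measure may differ.

```python
# Minimal sketch: pairwise Cohen's kappa as an IAA measure for
# aspect-level sentiment labels. The annotator groups and labels
# are hypothetical; the paper may use a different agreement metric.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Hypothetical sentiment labels assigned by three annotator groups
# to the same five aspect spans.
annotations = {
    "expert":      ["positive", "negative", "neutral", "positive", "negative"],
    "student":     ["positive", "negative", "positive", "positive", "negative"],
    "crowdworker": ["positive", "neutral", "positive", "positive", "negative"],
}

# Compute kappa for every pair of annotator groups, then average.
scores = {
    (a, b): cohen_kappa_score(annotations[a], annotations[b])
    for a, b in combinations(annotations, 2)
}
for (a, b), kappa in scores.items():
    print(f"kappa({a}, {b}) = {kappa:.2f}")
print(f"mean pairwise kappa = {sum(scores.values()) / len(scores):.2f}")
```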
IMPACT: Provides insights into the trade-offs between annotation reliability and efficiency for dataset construction in under-resourced NLP scenarios.
RANK_REASON: The cluster contains an academic paper detailing a comparative study on annotation quality for NLP tasks.