Researchers from SG-UniBuc-NLP developed a system for SemEval-2026 Task 6, which aims to detect political question evasion in English interviews. Their approach combines a multi-head RoBERTa model with a chunking strategy to handle responses that exceed the standard 512-token limit of Transformer encoders. The system achieved a Macro-F1 score of 0.80 on the coarse-grained clarity subtask and 0.51 on the fine-grained evasion-strategy subtask, placing 11th in both.
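The summary does not specify how the chunking is implemented; a common way to fit long responses into a 512-token encoder is a sliding window with overlap, where each chunk is encoded separately and the per-chunk outputs are later pooled. The sketch below illustrates that generic pattern only — the function name, window size, and stride are illustrative assumptions, not details from the paper.

```python
def chunk_token_ids(token_ids, max_len=512, stride=256):
    """Split a long token-id sequence into overlapping windows.

    A hypothetical sketch of sliding-window chunking (not the paper's
    exact method): each chunk holds at most max_len tokens, and
    consecutive chunks share max_len - stride tokens of overlap so no
    sentence is cut off without context.
    """
    if len(token_ids) <= max_len:
        return [token_ids]
    chunks = []
    start = 0
    while start < len(token_ids):
        chunks.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break  # last window reaches the end of the sequence
        start += stride
    return chunks


# Example: a 1000-token response becomes three overlapping 512-token windows.
windows = chunk_token_ids(list(range(1000)))
```

Each window would then be passed through the encoder, with the chunk-level representations aggregated (e.g., by mean- or max-pooling) before classification; the specific aggregation used by the system is not stated in this summary.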
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Presents a novel approach to handling long contexts in NLP tasks, potentially improving performance on similar classification challenges.
RANK_REASON Academic paper detailing a system for a specific NLP task at a competition.