PulseAugur

SG-UniBuc-NLP uses RoBERTa with chunking for political evasion detection

Researchers from SG-UniBuc-NLP developed a system for SemEval-2026 Task 6, which aims to detect political question evasion in English interviews. Their approach uses a Multi-Head RoBERTa model combined with a chunking strategy to handle responses exceeding the standard 512-token limit of Transformer encoders. The system achieved a Macro-F1 score of 0.80 on the coarse-grained clarity subtask and 0.51 on the fine-grained evasion strategy subtask, securing 11th place in both subtasks.
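The chunking step described above can be sketched in a few lines. This is an illustrative implementation only: the summary confirms the 512-token encoder limit, but the actual window advance and overlap used by the SG-UniBuc-NLP system are not stated here, so the 50-token stride is an assumption.

```python
# Hedged sketch of chunking a long token sequence for a 512-token encoder.
# The 50-token overlap (stride) is an assumed parameter for illustration,
# not the paper's reported configuration.

def chunk_token_ids(ids, max_len=512, stride=50):
    """Split a token-id sequence into overlapping windows of at most
    `max_len` tokens so each window fits a standard Transformer encoder."""
    if len(ids) <= max_len:
        return [ids]
    chunks = []
    step = max_len - stride  # advance, keeping `stride` tokens of overlap
    for start in range(0, len(ids), step):
        chunks.append(ids[start:start + max_len])
        if start + max_len >= len(ids):
            break  # last window already reaches the end of the sequence
    return chunks
```

Each chunk would then be encoded separately and the per-chunk representations pooled (for example, by averaging) before the coarse- and fine-grained classification heads; the pooling choice here is likewise an assumption.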

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Presents a novel approach to handling long contexts in NLP tasks, potentially improving performance on similar classification challenges.

RANK_REASON Academic paper detailing a system for a specific NLP task at a competition.

Read on arXiv cs.CL →

COVERAGE [2]

  1. arXiv cs.CL TIER_1 · Gabriel Stefan, Sergiu Nisioi

    SG-UniBuc-NLP at SemEval-2026 Task 6: Multi-Head RoBERTa with Chunking for Long-Context Evasion Detection

    arXiv:2604.26375v1 · Abstract: We describe our system for SemEval-2026 Task 6 (CLARITY: Unmasking Political Question Evasions), which classifies English political interview responses by coarse-grained clarity (3-way) and fine-grained evasion strategy (9-way). Sin…

  2. arXiv cs.CL TIER_1 · Sergiu Nisioi

    SG-UniBuc-NLP at SemEval-2026 Task 6: Multi-Head RoBERTa with Chunking for Long-Context Evasion Detection

    We describe our system for SemEval-2026 Task 6 (CLARITY: Unmasking Political Question Evasions), which classifies English political interview responses by coarse-grained clarity (3-way) and fine-grained evasion strategy (9-way). Since responses frequently exceed the 512-token lim…