Researchers have evaluated state-of-the-art aspect-based sentiment analysis (ABSA) approaches across seven languages, finding that fine-tuned large language models (LLMs) perform best, especially on complex tasks. The study explored zero-resource, data-only, and full-resource settings using cross-lingual transfer, code-switching, and machine translation. Findings indicate that cross-lingual training on multiple non-target languages is most effective for LLMs, while smaller models benefit more from code-switching, suggesting that the best multilingual ABSA strategy depends on model architecture. The study also introduced two new German datasets to promote further multilingual ABSA research.
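To make the code-switching strategy concrete, the sketch below shows one common way such augmentation is done: randomly swapping source-language tokens for target-language translations in ABSA training sentences. This is a minimal illustration under assumed details; the lexicon, example sentence, and `code_switch` function are hypothetical and not taken from the paper.

```python
# Minimal sketch of code-switching augmentation for ABSA training data.
# All names and data here are illustrative assumptions, not the paper's setup.
import random

# Toy English -> German lexicon; a real pipeline would derive substitutions
# from word alignments or a machine translation system.
LEXICON = {
    "battery": "Akku",
    "screen": "Bildschirm",
    "great": "großartig",
    "poor": "schlecht",
}

def code_switch(tokens, lexicon, rate=0.5, seed=0):
    """Replace a fraction of translatable tokens with target-language words,
    producing mixed-language input that exposes the model to both vocabularies."""
    rng = random.Random(seed)
    return [
        lexicon[t.lower()] if t.lower() in lexicon and rng.random() < rate else t
        for t in tokens
    ]

# A toy ABSA instance: aspect terms paired with sentiment labels.
# Note: if an aspect term itself gets swapped (e.g. "battery" -> "Akku"),
# the aspect-span annotations must be remapped accordingly.
sentence = "The battery is great but the screen is poor".split()
aspects = [("battery", "positive"), ("screen", "negative")]

mixed = code_switch(sentence, LEXICON)
print(" ".join(mixed))
# e.g. "The Akku is great but the Bildschirm is schlecht"
```

The design intuition, per the paper's finding, is that smaller models benefit from seeing target-language vocabulary embedded in source-language context, which eases cross-lingual transfer without requiring full translated training data.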
IMPACT: Provides new datasets and evaluation strategies for multilingual aspect-based sentiment analysis, potentially improving cross-lingual LLM capabilities.
RANK_REASON: This is a research paper evaluating existing models and introducing new datasets for a specific NLP task.