Researchers are exploring alternatives to traditional instruction tuning for language models, particularly for smaller and multilingual models. One paper investigates how well in-context learning (ICL) can substitute for instruction tuning in non-English languages and across model sizes, finding that ICL performance degrades in both settings. Another study introduces M-DaQ, a framework for building high-quality, diverse multilingual instruction-tuning datasets that improve model performance across 18 languages. A third paper proposes weighted in-context influence (wICI), a data selection method that identifies effective instruction-tuning examples and outperforms existing baselines under data constraints.
Summary written by gemini-2.5-flash-lite from 4 sources.
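As a concrete illustration of the ICL setup the first paper evaluates, the sketch below builds a few-shot instruction-following prompt for a base (non-instruction-tuned) model. The demonstration pairs and prompt template are assumptions chosen for illustration, not taken from the paper.

```python
# Minimal sketch of in-context learning (ICL) for instruction following:
# instead of fine-tuning, instruction/response demonstrations are placed
# directly in the prompt. Demonstrations and format are illustrative only.
DEMONSTRATIONS = [
    ("Translate to German: Good evening.", "Guten Abend."),
    ("Translate to German: See you tomorrow.", "Bis morgen."),
]

def build_icl_prompt(instruction: str) -> str:
    """Prepend few-shot demonstrations so a base model can imitate
    the instruction-following pattern without any fine-tuning."""
    shots = "\n\n".join(
        f"Instruction: {ins}\nResponse: {res}" for ins, res in DEMONSTRATIONS
    )
    return f"{shots}\n\nInstruction: {instruction}\nResponse:"

print(build_icl_prompt("Translate to German: Thank you very much."))
```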
IMPACT New methods for multilingual instruction tuning and data selection could improve the performance and accessibility of LLMs across diverse languages.
RANK_REASON The cluster contains multiple arXiv papers detailing novel research in language model instruction tuning and data selection.
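The exact wICI weighting is not described here, but the generic idea behind in-context influence, scoring a candidate training example by how much it lowers a model's loss on held-out examples when prepended as context, can be sketched as follows. This is a rough illustration, not the paper's method; the model choice (gpt2), separator, and uniform averaging over the validation set are all assumptions.

```python
# Rough sketch of influence-based data selection for instruction tuning.
# NOTE: this is NOT the wICI method from the paper; it only illustrates the
# generic "in-context influence" idea: score a candidate training example by
# how much prepending it as context lowers a model's loss on validation
# examples. Model, separator, and averaging are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def loss_on(target: str, context: str = "") -> float:
    """Cross-entropy on `target` tokens, optionally conditioned on
    `context` tokens, whose positions are masked out of the loss."""
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    if context:
        ctx_ids = tokenizer(context, return_tensors="pt").input_ids
        ids = torch.cat([ctx_ids, tgt_ids], dim=1)
        labels = ids.clone()
        labels[:, : ctx_ids.shape[1]] = -100  # ignore context in the loss
    else:
        ids, labels = tgt_ids, tgt_ids
    with torch.no_grad():
        out = model(input_ids=ids, labels=labels)
    return out.loss.item()

def in_context_influence(candidate: str, val_examples: list[str]) -> float:
    """Average loss reduction on validation examples when `candidate`
    is prepended as an in-context demonstration (higher is better)."""
    deltas = [
        loss_on(v) - loss_on(v, context=candidate + "\n\n") for v in val_examples
    ]
    return sum(deltas) / len(deltas)

# Rank candidate examples and keep the most influential ones.
candidates = [
    "Instruction: Translate to French.\nInput: Good morning.\nOutput: Bonjour.",
    "Instruction: Summarize.\nInput: The cat sat on the mat.\nOutput: A cat sat.",
]
val_set = ["Instruction: Translate to French.\nInput: Thank you.\nOutput: Merci."]
ranked = sorted(candidates, key=lambda c: in_context_influence(c, val_set), reverse=True)
print(ranked[0])
```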