A new study systematically investigates the effectiveness of iterative self-refinement with Large Language Models (LLMs) for document-level literary translation. The researchers found that a robust approach is document-level machine translation followed by segment-level refinement, which consistently yielded strong improvements. Simple, general refinement prompts were more effective than error-specific ones, and gains appeared primarily in fluency, style, and terminology, with less impact on adequacy. The study also suggests that refinement tends to steer outputs towards the refiner's own distribution rather than correcting specific errors.
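The two-stage workflow the study highlights can be illustrated with a minimal sketch. Here `llm` is a hypothetical callable standing in for whatever model is used, the prompt wording is illustrative rather than taken from the paper, and paragraph-based segmentation is an assumption:

```python
from typing import Callable

def translate_then_refine(source_doc: str, llm: Callable[[str], str],
                          src: str = "German", tgt: str = "English") -> str:
    """Document-level MT followed by segment-level refinement with a simple, general prompt."""
    # Stage 1: translate the whole document in one pass to keep cross-sentence context.
    draft = llm(
        f"Translate the following {src} document into {tgt}, "
        f"preserving its style and terminology:\n\n{source_doc}"
    )

    # Stage 2: refine each segment individually with a general prompt
    # (segments are approximated here as paragraphs).
    refined = []
    for seg in draft.split("\n\n"):
        refined.append(llm(
            "Improve the fluency and style of this literary translation "
            f"without changing its meaning:\n\n{seg}"
        ))
    return "\n\n".join(refined)
```

Any chat-model wrapper that maps a prompt string to a completion string could be passed as `llm`; the sketch only captures the document-then-segment structure, not the paper's exact prompts or models.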
IMPACT Clarifies the mechanisms and limitations of LLM self-refinement for translation, guiding the development of more effective MT systems.
RANK_REASON Academic paper presenting a systematic study on LLM refinement techniques for translation.