PulseAugur
research · [2 sources]

StoryAlign benchmark trains AI to generate human-preferred stories

Researchers have introduced StoryAlign, a new benchmark and reward model designed to improve the quality of AI-generated stories. Current large language models struggle to produce narratives aligned with human preferences, diverging from human-authored works in complex narrative structure and subjective appeal. StoryRMB, the benchmark, comprises over 1,100 human-verified instances, on which existing reward models achieved only 66.3% accuracy in selecting the preferred story. The new StoryReward model, trained on roughly 100,000 story preference pairs, achieves state-of-the-art performance and better aligns generated stories with human tastes.
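The 66.3% figure reflects a standard pairwise-selection metric: for each human-verified pair, the reward model scores both stories and is counted correct when the preferred story outscores the rejected one. A minimal sketch of that metric (the `score` function here is a toy stand-in, not the paper's reward model):

```python
def pairwise_accuracy(pairs, score):
    """Fraction of (chosen, rejected) pairs where the chosen story outscores the rejected one."""
    correct = sum(1 for chosen, rejected in pairs if score(chosen) > score(rejected))
    return correct / len(pairs)

# Toy scorer standing in for a trained reward model: longer story wins.
toy_score = len
pairs = [
    ("a longer chosen story", "short"),          # scored correctly
    ("tie?", "this rejected one is longer"),     # scored incorrectly
]
print(pairwise_accuracy(pairs, toy_score))  # 0.5
```

In practice the scorer would be a trained model assigning a scalar reward to each story; the benchmark accuracy is just this fraction over all human-verified pairs.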

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Enhances AI's ability to generate more human-aligned narratives, potentially improving creative writing tools and interactive storytelling.

RANK_REASON The cluster contains an academic paper detailing a new benchmark and a trained reward model for story generation.

Read on arXiv cs.CL →

COVERAGE [2]

  1. arXiv cs.CL TIER_1 · Haotian Xia, Hao Peng, Yunjia Qi, Xiaozhi Wang, Bin Xu, Lei Hou, Juanzi Li

    StoryAlign: Evaluating and Training Reward Models for Story Generation

    arXiv:2605.04831v1 Announce Type: new Abstract: Story generation aims to automatically produce coherent, structured, and engaging narratives. Although large language models (LLMs) have significantly advanced text generation, stories generated by LLMs still diverge from human-auth…

  2. arXiv cs.CL TIER_1 · Juanzi Li

    StoryAlign: Evaluating and Training Reward Models for Story Generation

    Story generation aims to automatically produce coherent, structured, and engaging narratives. Although large language models (LLMs) have significantly advanced text generation, stories generated by LLMs still diverge from human-authored works regarding complex narrative structure…