LLM post-training reduces human-like behavior, study finds

A new study introduces Psych-201, a dataset designed to measure how closely large language models mimic human behavior. The researchers find that post-training, the process used to make LLMs more helpful, consistently makes them less aligned with human choices, and that this misalignment grows with newer model generations even as base capabilities improve. Techniques such as persona-induction, which condition models on participant-specific data to make them more human-like, also fail to improve predictions of individual behavior.
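To make the alignment measure concrete, here is a minimal Python sketch of one plausible way to score behavioral alignment: the mean negative log-likelihood of human choices under a model's distribution over answer options. This is an illustration only; the trial rows and the uniform_logprobs stub are hypothetical stand-ins, not the paper's dataset or protocol.

```python
# Illustrative sketch: scoring how well a model's choice distribution
# matches human participants' choices. All data below is made up.
import math
from typing import Callable

# Each trial: a task prompt, the answer options shown, and the human's choice.
trials = [
    {"prompt": "Gamble A pays $10 with p=0.5; Gamble B pays $4 for sure. Pick one.",
     "options": ["A", "B"], "human_choice": "B"},
    {"prompt": "Option A: $110 in a week. Option B: $100 now. Pick one.",
     "options": ["A", "B"], "human_choice": "B"},
]

def uniform_logprobs(prompt: str, options: list[str]) -> dict[str, float]:
    """Stand-in for a real model call: returns a uniform distribution.
    In practice these would be the LLM's token log-probabilities."""
    lp = math.log(1.0 / len(options))
    return {o: lp for o in options}

def behavioral_alignment(trials: list[dict], option_logprobs: Callable) -> float:
    """Mean negative log-likelihood of human choices under the model.
    Lower = the model's choices better match human behavior."""
    nll = 0.0
    for t in trials:
        logprobs = option_logprobs(t["prompt"], t["options"])
        nll -= logprobs[t["human_choice"]]
    return nll / len(trials)

print(f"mean NLL vs. humans: {behavioral_alignment(trials, uniform_logprobs):.3f}")
```

Under a score like this, persona-induction would amount to prepending participant-specific history to each prompt before querying the model; the study reports that this does not improve individual-level predictions.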

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Suggests current LLM post-training pipelines may hinder their use as accurate models of human behavior.

RANK_REASON The cluster contains an academic paper detailing a new dataset and findings about LLM behavior.

Read on arXiv cs.CL →

COVERAGE [1]

  1. arXiv cs.CL TIER_1 (ET) · Eric Schulz

    Post-training makes large language models less human-like

    Large language models (LLMs) are increasingly used as surrogates for human participants, but it remains unclear which models best capture human behavior and why. To address this, we introduce Psych-201, a novel dataset that enables us to measure behavioral alignment at scale. We …