PulseAugur
LLM personas approximate human survey data for stable attributes

Researchers have investigated the reliability of using digital personas, powered by large language models (LLMs), as substitutes for human survey respondents. Their study, which used the LISS panel across several persona architectures and LLMs, found that these personas can effectively approximate human response distributions, particularly for questions about stable attributes and values. However, the personas performed poorly at individual-level prediction and failed to capture complex respondent structures. The effectiveness of digital personas depended more on the inherent structure of human responses than on the specific LLM used: they performed best on low-variance, common response patterns and worst on subjective or rare ones.
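Claims that personas "approximate human response distributions" are usually quantified with a distributional distance between the human and persona answer distributions. As an illustrative sketch only (the paper's actual metrics are not given here; the data and the choice of total variation distance are assumptions), one could compare two sets of Likert-scale responses like this:

```python
from collections import Counter

def response_distribution(answers, categories):
    """Normalize raw survey answers into a probability distribution over categories."""
    counts = Counter(answers)
    total = len(answers)
    return [counts.get(c, 0) / total for c in categories]

def total_variation(p, q):
    """Total variation distance between two discrete distributions (0 = identical, 1 = disjoint)."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical 5-point Likert responses from human panelists and LLM personas.
categories = [1, 2, 3, 4, 5]
human = [4, 5, 4, 3, 4, 5, 2, 4, 3, 4]
persona = [4, 4, 5, 3, 4, 4, 3, 4, 4, 5]

tvd = total_variation(
    response_distribution(human, categories),
    response_distribution(persona, categories),
)
print(f"TVD = {tvd:.2f}")  # prints "TVD = 0.10"
```

A small distance at the aggregate level is compatible with the study's other finding: matching the marginal distribution says nothing about predicting any individual respondent's answer.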

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Provides guidance on the appropriate use of LLM-generated personas in survey research, highlighting areas where human validation remains essential.

RANK_REASON Academic paper detailing a study on the capabilities and limitations of LLM-powered digital personas for survey research.

Read on arXiv stat.ML →

COVERAGE [1]

  1. arXiv stat.ML TIER_1 · Jairo Diaz-Rodriguez

    When Can Digital Personas Reliably Approximate Human Survey Findings?

    Digital personas powered by Large Language Models (LLMs) are increasingly proposed as substitutes for human survey respondents, yet it remains unclear when they can reliably approximate human survey findings. We answer this question using the LISS panel, constructing personas fro…