Researchers have investigated the reliability of digital personas, powered by Large Language Models, as substitutes for human respondents in surveys. Their study, which used the LISS panel with various persona architectures and LLMs, found that these personas can approximate human response distributions reasonably well, particularly for questions about stable attributes and values. However, the personas performed poorly at predicting individual responses and failed to capture complex respondent structures. Their effectiveness depended more on the inherent structure of human responses than on the specific LLM used: they performed best on low-variance, common response patterns and worst on subjective or rare responses.
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Provides guidance on the appropriate use of LLM-generated personas in survey research, highlighting areas where human validation remains essential.