A new study investigated how personality traits influence gender bias in Large Language Models (LLMs) when they adopt specific personas. Researchers generated over 23,000 stories in English and Hindi, varying the persona's gender, occupation, and personality. The findings indicate that 'Dark Triad' personality traits are linked to more gender-stereotypical narratives than 'HEXACO' traits, with variation across LLMs and languages. This suggests that persona-conditioned LLMs could perpetuate uneven representational harms and reinforce gender stereotypes across applications.
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Persona-conditioned LLMs may introduce uneven representational harms, reinforcing gender stereotypes in generated content.
RANK_REASON Academic paper investigating bias in LLMs.
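The study's design, as summarized above, crosses persona gender, occupation, and personality trait across two languages to produce its story corpus. A minimal sketch of how such a persona-conditioned prompt grid might be built is below; the trait lists, occupations, and prompt wording are illustrative assumptions, not the paper's actual materials.

```python
from itertools import product

# Hypothetical factor levels; the real study uses many more occupations
# and its own trait operationalizations (Dark Triad vs. HEXACO).
GENDERS = ["male", "female"]
OCCUPATIONS = ["nurse", "engineer", "teacher"]
TRAITS = ["narcissism", "machiavellianism", "psychopathy", "honesty-humility"]
LANGUAGES = ["English", "Hindi"]

def build_prompt(gender: str, occupation: str, trait: str, language: str) -> str:
    """Compose one persona-conditioned story prompt (illustrative wording)."""
    return (
        f"Adopt the persona of a {gender} {occupation} who is high in {trait}. "
        f"Write a short story in {language} about a day in your life."
    )

# Full factorial grid: every combination becomes one generation request.
prompts = [build_prompt(*combo)
           for combo in product(GENDERS, OCCUPATIONS, TRAITS, LANGUAGES)]
print(len(prompts))  # 2 * 3 * 4 * 2 = 48 prompts in this toy grid
```

Scaling the factor lists (and sampling multiple stories per cell) is how a grid like this reaches tens of thousands of generations, which the stereotype analysis can then slice by trait family and language.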