PulseAugur

LLMs perpetuate harmful stereotypes against Global Majority nationalities

A new research paper documents significant representational harms in large language models (LLMs) when they generate narratives about nationalities from the Global Majority. The study found that LLMs perpetuate harmful stereotypes, erasure, and one-dimensional portrayals, with minoritized national identities disproportionately cast in subordinated roles. These biases are exacerbated when US nationality cues appear in prompts, and they persist even when those cues are replaced with non-US identities, indicating deep-seated US-centric biases.

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Highlights the potential for LLMs to perpetuate harmful stereotypes, underscoring the need for careful evaluation and mitigation strategies for non-US populations.

RANK_REASON Academic paper detailing representational harms in LLM-generated narratives.

Read on arXiv cs.CL →

COVERAGE [2]

  1. arXiv cs.CL TIER_1 · Ilana Nguyen, Harini Suresh, Thema Monroe-White, Evan Shieh

    Representational Harms in LLM-Generated Narratives Against Global Majority Nationalities

    arXiv:2604.22749v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly used for text generation tasks from everyday use to high-stakes enterprise and government applications, including simulated interviews with asylum seekers. While many works highlight the…

  2. arXiv cs.CL TIER_1 · Evan Shieh

    Representational Harms in LLM-Generated Narratives Against Global Majority Nationalities

    Large language models (LLMs) are increasingly used for text generation tasks from everyday use to high-stakes enterprise and government applications, including simulated interviews with asylum seekers. While many works highlight the new potential applications of LLMs, there are r…