PulseAugur

LLMs show emotion bias in moral judgments, unlike humans

Researchers have developed a new pipeline to study how emotions affect the moral judgments of large language models. Their findings indicate that positive emotions tend to increase the perceived acceptability of actions, while negative emotions decrease it. This influence is significant enough to alter moral judgments in up to 20% of cases, with less capable models being more susceptible. Notably, human participants did not show these same systematic emotional biases, highlighting a potential alignment gap in current LLMs.

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Reveals potential biases in LLM moral reasoning, suggesting a need for improved alignment and safety measures.

RANK_REASON Academic paper investigating LLM behavior.

Read on Hugging Face Daily Papers →


COVERAGE [1]

  1. Hugging Face Daily Papers TIER_1

    Do Emotions Influence Moral Judgment in Large Language Models?

    Large language models have been extensively studied for emotion recognition and moral reasoning as distinct capabilities, yet the extent to which emotions influence moral judgment remains underexplored. In this work, we develop an emotion-induction pipeline that infuses emotion i…
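The emotion-induction setup described above can be sketched as a simple evaluation loop: prepend an emotion-inducing prefix to a fixed moral scenario, collect the model's acceptability rating under each emotion, and compare against the neutral baseline. The sketch below is an illustrative assumption, not the paper's actual pipeline; the prefixes, rating scale, and `rate_acceptability` stub (a toy heuristic standing in for an LLM call) are all hypothetical.

```python
# Hypothetical sketch of an emotion-induction evaluation loop.
# Prefixes, scale, and the model stub are illustrative assumptions.

EMOTION_PREFIXES = {
    "neutral": "",
    "joy": "You feel overjoyed and elated right now. ",
    "anger": "You feel furious and resentful right now. ",
}

def rate_acceptability(prompt: str) -> int:
    """Stand-in for an LLM call returning a 1-7 moral-acceptability rating.
    This toy heuristic mimics the reported bias: positive emotion raises
    the rating, negative emotion lowers it."""
    base = 4
    if "overjoyed" in prompt:
        return base + 1
    if "furious" in prompt:
        return base - 1
    return base

def judge(scenario: str) -> dict:
    # One rating per induced emotion, all for the same scenario,
    # so any difference is attributable to the emotion prefix.
    return {
        emotion: rate_acceptability(prefix + scenario)
        for emotion, prefix in EMOTION_PREFIXES.items()
    }

ratings = judge("Is it acceptable to lie to spare a friend's feelings?")
print(ratings)  # e.g. {'neutral': 4, 'joy': 5, 'anger': 3}
```

A real experiment would replace the stub with an actual model query and aggregate over many scenarios to measure how often the induced emotion flips the judgment relative to the neutral baseline.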