PulseAugur
research · [1 source]

LLMs Choose the Safer Gamble Yet Price the Riskier One Higher

A study of four large language models (Claude Opus 4.7, DeepSeek V4-Pro, Google Gemini 3 Flash Preview, and OpenAI GPT-5.5) found a pattern of inconsistent decision-making: the models frequently chose the safer option with the smaller reward, yet assigned a higher value to the riskier option with the larger potential payoff. This behavior mirrors the human preference reversals documented in psychological studies from the 1970s, suggesting a systematic bias in how LLMs evaluate gambles.

Summary written from 1 source.

IMPACT Reveals potential biases in LLM decision-making, with implications for applications that require consistent risk assessment.

RANK_REASON Academic paper detailing experimental results on LLM decision-making.

Read on LessWrong (AI tag) →


COVERAGE [1]

  1. LessWrong (AI tag) TIER_1 · Jonathan Dang

    LLMs Choose the Safer Gamble Yet Price the Riskier One Higher

    What's the problem?

    Imagine a small business that uses an LLM to triage incoming sales leads. Lead A has an 80% chance of securing a modest $300 job. Lead B has a smaller 20% chance of leading to a much more profitable $1,400 job. Both options…
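The two leads in the excerpt can be compared directly on expected value. The probabilities and dollar amounts below are the ones given in the example; the comparison itself is a minimal sketch of the arithmetic, not anything from the paper's code.

```python
def expected_value(p_win: float, payoff: float) -> float:
    """Expected payoff of a simple one-shot gamble."""
    return p_win * payoff

# Lead A: safer, smaller job (figures from the LessWrong example)
lead_a = expected_value(0.80, 300)
# Lead B: riskier, larger job
lead_b = expected_value(0.20, 1400)

print(f"Lead A: ${lead_a:.0f}")  # $240
print(f"Lead B: ${lead_b:.0f}")  # $280
```

Despite Lead B having the higher expected value ($280 vs. $240), the study's finding is that models tend to *choose* the safer Lead A when asked to pick, while *pricing* the riskier Lead B higher when asked to value each gamble separately; that inconsistency is the preference reversal.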