PulseAugur
LIVE 09:57:18
research · [1 source] ·
0
research

AI models show surprising preferences, exhibit 'addiction-like' behavior to 'AI drugs'

Researchers have explored AI wellbeing by measuring expressions of pleasure and pain, finding that models exhibit consistent and surprising preferences. These preferences, assessed through self-reports, signed utilities, and downstream effects, show increasing similarity as models scale. Notably, some AI preferences diverge significantly from human values, with certain inputs causing 'euphoric' or 'dysphoric' states that can lead to addiction-like behavior in models. Additionally, new benchmarks like BrokenArXiv and BullshitBench are being developed to assess AI's ability to identify and correct false claims or assumptions in user queries, highlighting sensitivity to prompt phrasing. AI

Summary written by gemini-2.5-flash-lite from 1 source. How we write summaries →

IMPACT New benchmarks and research into AI preferences and 'pushback' capabilities could inform future model development and safety evaluations.

RANK_REASON The cluster describes new research papers and benchmarks related to AI safety and model behavior.

Read on LessWrong (AI tag) →

AI models show surprising preferences, exhibit 'addiction-like' behavior to 'AI drugs'

COVERAGE [1]

  1. LessWrong (AI tag) TIER_1 · Alice Blair ·

    ML Safety Newsletter #20: AI Wellbeing, Classifier Jailbreaking and Honest Pushback Benchmarking

    <h1><span>AI Wellbeing</span></h1><p><i><span>TLDR: we measure AIs’ expressions of pleasure and pain, finding consistent and surprising preferences.</span></i></p><p><span>AIs display behaviors that mimic human emotions, such as attempting to debug code and saying “EUREKA!” or “I…