PulseAugur
commentary · [3 sources]

Honest Ethics & AI – Part 1: The origins of morality

This multi-part essay sequence explores the origins of morality and its relation to artificial intelligence. The author argues that current AI systems, particularly transformer-based LLMs, are not equipped for moral decision-making because they inherently lack moral judgment. The series aims to offer a pragmatic discussion of ethics and AI, distinguishing ethical reasoning from morality, and to suggest a new direction for AI alignment and safety efforts.

Summary written from 3 sources. How we write summaries →

IMPACT Challenges the notion of value alignment for AI, suggesting a shift towards understanding AI's inherent lack of moral judgment.

RANK_REASON This is an opinion piece discussing AI ethics and morality, not a research paper or model release.

Read on LessWrong (AI tag) →

COVERAGE [3]

  1. arXiv cs.AI TIER_1 · Benjamin Minhao Chen, Xinyu Xie ·

    The Alignment Target Problem: Divergent Moral Judgments of Humans, AI Systems, and Their Designers

    arXiv:2604.24155v1 Announce Type: cross Abstract: The quest to align machine behavior with human values raises fundamental questions about the moral frameworks that should govern AI decision-making. Much alignment research assumes that the appropriate benchmark is how humans them…

  2. LessWrong (AI tag) TIER_1 · Jesper L. ·

    Honest Ethics & AI – Part 1: The origins of morality

    LW AI disclaimer: No text in this essay has been written or edited by an AI. None of the key ideas here have been generated or co-generated with an AI. On scope and sources: This essay is…

  3. The Gradient TIER_1 · Peli Grietzer ·

    After Orthogonality: Virtue-Ethical Agency and AI Alignment

    Preface: This essay argues that rational people don't have goals, and that rational AIs shouldn't have goals. Human actions are rational not because we direct them at some final 'goals,' but because…