PulseAugur

Existential Risk Observatory seeks collaborators for AI threat model alignment research

The Existential Risk Observatory, in collaboration with MIT FutureTech and FLI, is launching a project to establish researcher consensus on AI existential threat models. The initiative aims to clarify disagreements among experts regarding how advanced AI could lead to human extinction by building a taxonomy of threat models and working toward consensus on key assumptions. This effort seeks to deconfuse subfields such as AI alignment, governance, and offense/defense balance by explicitly considering different threat scenarios.

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Aims to clarify AI existential threat models, potentially guiding future alignment and governance research by establishing common ground among researchers.

RANK_REASON This is a research initiative focused on AI safety and existential risk, involving academic collaboration and a call for research contributions.

Read on LessWrong (AI tag) →

COVERAGE [1]

  1. LessWrong (AI tag) TIER_1 · otto.barten

    Open internship position + call for collaborations on threat model-dependent alignment, governance, and offense/defense balance

    At the Existential Risk Observatory, we're currently carrying out a project called Solving the Right Probl…