PulseAugur

Apollo Research expands to SF, focuses on AI misalignment and monitoring

Apollo Research has opened an office in San Francisco and is actively hiring for technical roles in both San Francisco and London. The company is focusing its research on whether future AI models may develop misaligned preferences and on how effective training methods are at preventing this. Apollo is also developing Watcher, a product for real-time monitoring of coding agents, and is dedicating resources to AI governance, particularly around automated AI research and the risk that recursive self-improvement leads to loss of control.

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Apollo Research is advancing AI safety by developing monitoring tools and researching AI misalignment, crucial for responsible AI development and governance.

RANK_REASON The cluster details research efforts and product development focused on AI safety and governance, including the publication of a monitoring agenda and the development of an agent monitoring product.

Read on LessWrong (AI tag) →

COVERAGE [1]

  1. LessWrong (AI tag) TIER_1 · Marius Hobbhahn

    Apollo Update May 2026

    1. We now have an SF office. We're hiring for all technical roles in SF and London! (https://www.apolloresearch.ai/careers/)
    2. The …