Max Tegmark of the Future of Life Institute praised Anthropic and OpenAI for refusing to develop AI for the Department of War, specifically concerning autonomous weapons and domestic surveillance. He emphasized that AI should always remain under meaningful human control, especially in lethal applications, and that current AI is too unpredictable for such high-stakes uses. Tegmark urged lawmakers to codify these principles into law, arguing that fully autonomous weapons threaten national security through escalation and proliferation.
Summary written by gemini-2.5-flash-lite from 1 source.