OpenAI has established a new Preparedness team, led by Aleksander Madry, to focus on the safety risks associated with highly capable AI systems, including potential catastrophic misuse. The team will integrate capability assessment, evaluations, and red teaming for future frontier models and AGI. OpenAI is also launching an AI Preparedness Challenge to surface novel catastrophic misuse risks, offering API credits to top submissions and recruiting talent from among the participants.
Summary written by gemini-2.5-flash-lite from 1 source.
OpenAI announces a new internal team and a public challenge focused on mitigating catastrophic risks from future frontier AI models.