OpenAI has updated its Preparedness Framework to better measure and mitigate the risk of severe harm from advanced AI capabilities. The revised framework includes clearer criteria for prioritizing high-risk capabilities, a sharper split of those capabilities into 'Tracked' and 'Research' categories, and defined thresholds for 'High' and 'Critical' capability levels. The update aims to make OpenAI's safety evaluations more rigorous, actionable, and transparent as AI technology advances.
Summary written by gemini-2.5-flash-lite from 1 source.