OpenAI has released a system card detailing the safety measures implemented for its new "Deep research" capability. This agentic feature, powered by an early version of the o3 model, is designed to conduct multi-step internet research, analyze various data formats, and execute Python code. Prior to its release to Pro users, OpenAI conducted extensive safety testing, including external red teaming and risk evaluations, to mitigate potential issues such as prompt injections, disallowed content, privacy concerns, and bias.
Summary written by gemini-2.5-flash-lite from 1 source.