OpenAI has announced a new Safety Fellowship program to support external researchers in advancing AI safety and alignment. The fellowship, running from September 2026 to February 2027, will provide stipends, compute resources, and mentorship for participants to produce research outputs such as papers or datasets. Concurrently, OpenAI has detailed its ongoing safety practices, emphasizing empirical testing, alignment research, abuse monitoring, and a systematic approach throughout the model lifecycle. The company also reaffirmed its commitment to child safety by adopting Safety by Design principles alongside other major tech firms, pledging to develop, deploy, and maintain AI models with robust child protection measures and to combat AI-generated child sexual abuse material.
Summary written by gemini-2.5-flash-lite from 3 sources.
OpenAI announced a new fellowship for AI safety research and detailed its existing safety practices, including commitments to child safety, which fall under research and policy initiatives.