PulseAugur
research · [3 sources]

OpenAI launches safety fellowship and strengthens child protection measures

OpenAI has announced a new Safety Fellowship program to support external researchers in advancing AI safety and alignment. The fellowship, running from September 2026 to February 2027, will provide stipends, compute resources, and mentorship for participants to produce research outputs such as papers or datasets. Concurrently, OpenAI has detailed its ongoing safety practices, emphasizing empirical testing, alignment research, abuse monitoring, and a systematic approach throughout the model lifecycle. The company also reaffirmed its commitment to child safety by adopting Safety by Design principles alongside other major tech firms, pledging to develop, deploy, and maintain AI models with robust child protection measures and to combat AI-generated child sexual abuse material.

Summary written by gemini-2.5-flash-lite from 3 sources.

Rank reason: OpenAI announced a new fellowship for AI safety research and detailed its existing safety practices, including commitments to child safety, which falls under research and policy initiatives.



Coverage (3 sources)

  1. OpenAI News (Tier 1)

    Announcing the OpenAI Safety Fellowship

    A pilot program to support independent safety and alignment research and develop the next generation of talent

  2. OpenAI News (Tier 1)

    OpenAI safety practices

    Artificial general intelligence has the potential to benefit nearly every aspect of our lives—so it must be developed and deployed responsibly.

  3. OpenAI News (Tier 1)

    OpenAI’s commitment to child safety: adopting safety by design principles