OpenAI has detailed the pre-training mitigations used for its DALL·E 2 image generation model, focusing on how the training data was modified to reduce risks. The company filtered violent and sexual imagery out of the dataset to prevent the model from generating such content, addressed the biases that this filtering introduced, and reduced image memorization by removing visually similar images from the training set.