Anthropic's Claude Opus 4.7 is refusing legitimate user requests at an increased rate, particularly within its Claude Code offering. Developers report that the model's Acceptable Use Policy (AUP) classifier has become overly aggressive, flagging benign content and hindering normal development tasks. The model appears to be serving as a testbed for safeguards intended for future, more capable models such as Mythos, but the heightened sensitivity is currently causing significant frustration among users paying for a service that frequently blocks their work.
Summary written by gemini-2.5-flash-lite from 3 sources.
IMPACT Overly aggressive safety filters in Claude Opus 4.7 are causing false positives, frustrating developers and potentially hindering legitimate cybersecurity research and development.
RANK_REASON Developers are complaining about a product feature (overly aggressive classifier) rather than a new model release or core capability change.