In May 2025, the law firm Latham & Watkins filed a court declaration in a case involving Anthropic that contained fabricated legal citations generated by Anthropic's Claude AI. The errors, which included incorrect authors and titles for existing sources, went uncaught by the legal team and were identified only by opposing counsel. The incident prompted a federal court to mandate explicit disclosure of AI usage and to require human verification of all future filings, underscoring the risks of AI-generated content in legal practice.
Summary written by gemini-2.5-flash-lite from 3 sources.
IMPACT Highlights the critical need for human oversight and verification of AI-generated content in professional legal contexts.
RANK_REASON A law firm used an AI model in a court filing, leading to fabricated citations and a subsequent court order.