A journalist has highlighted DeepSeek's tendency to fabricate biographical details, a problem known as AI hallucination. This issue, where large language models confidently present incorrect information as fact, remains a persistent challenge across the industry. China's legal system is beginning to address AI hallucination: the first infringement case related to AI-generated recommendations has appeared in the Supreme People's Court work report.
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Highlights the persistent challenge of AI hallucination, which may erode user trust and is beginning to produce legal precedents.
RANK_REASON The article discusses a known issue with LLMs (hallucination) and provides examples, but does not announce a new model release or paradigm shift.