A new benchmark called CiteAudit has been developed to address the issue of hallucinated citations generated by large language models in scientific research. The framework uses a multi-agent pipeline to extract claims, retrieve evidence, and verify whether cited sources actually support the claims attributed to them. The system aims to improve the trustworthiness of scientific references by providing scalable infrastructure for auditing citations, especially as manual verification becomes impractical at the scale of modern literature.
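The three-stage pipeline described above (claim extraction, evidence retrieval, verification) can be sketched as follows. This is a minimal illustrative toy, not the actual CiteAudit implementation: the bracket-based claim format, the dictionary-backed corpus, and the word-overlap verification heuristic are all assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    cited_source: str

def extract_claims(paper_text: str) -> list[Claim]:
    # Assumed toy extractor: each "sentence [source_id]" line becomes a claim.
    claims = []
    for line in paper_text.splitlines():
        if "[" in line and "]" in line:
            text, _, rest = line.partition("[")
            source = rest.split("]")[0]
            claims.append(Claim(text.strip(), source))
    return claims

def retrieve_evidence(claim: Claim, corpus: dict[str, str]) -> str:
    # Assumed retrieval step: look up the cited source's text in a corpus.
    # A real system would search full documents, not a flat dictionary.
    return corpus.get(claim.cited_source, "")

def verify(claim: Claim, evidence: str) -> bool:
    # Toy verification heuristic: does the evidence share content words
    # with the claim? A real verifier would use an NLI or LLM judge.
    claim_words = {w.lower().strip(".,") for w in claim.text.split() if len(w) > 3}
    evidence_words = {w.lower().strip(".,") for w in evidence.split()}
    return bool(claim_words & evidence_words)

def audit(paper_text: str, corpus: dict[str, str]) -> dict[str, bool]:
    # Run all three stages and report, per cited source, whether the
    # retrieved evidence supports the claim.
    return {c.cited_source: verify(c, retrieve_evidence(c, corpus))
            for c in extract_claims(paper_text)}
```

A real verifier would replace the word-overlap check with an entailment model or an LLM judge; the sketch only shows how the stages compose.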
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Provides a new tool to audit LLM-generated citations, improving the integrity of scientific literature.
RANK_REASON The cluster describes a new benchmark and framework for verifying scientific references in academic papers, which is a research contribution.