PulseAugur

New benchmark tackles LLM-generated fake scientific citations

A new benchmark, CiteAudit, addresses the problem of hallucinated citations generated by large language models in scientific research. The framework uses a multi-agent pipeline to extract claims, retrieve evidence, and verify whether each cited source actually supports the claim attributed to it. By providing scalable infrastructure for auditing citations, the system aims to improve the trustworthiness of scientific references as manual verification becomes impractical.
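The three-stage pipeline described above can be sketched in miniature. This is an illustrative toy, not the benchmark's actual API: the function names, the marker-based claim extraction, and the keyword-overlap "verifier" (a stand-in for an LLM-based entailment judge) are all assumptions.

```python
import re

def extract_claims(text: str) -> list[str]:
    # Naive claim extraction: treat each sentence that carries a
    # citation marker like "[1]" as a cited claim.
    sentences = re.split(r"(?<=\.)\s+", text.strip())
    return [s for s in sentences if re.search(r"\[\d+\]", s)]

def retrieve_evidence(claim: str, corpus: dict[str, str]) -> str:
    # Toy retrieval: look up the cited source by its numeric marker.
    m = re.search(r"\[(\d+)\]", claim)
    return corpus.get(m.group(1), "") if m else ""

def verify(claim: str, evidence: str) -> bool:
    # Toy verification: keyword overlap as a placeholder for an
    # LLM-based support judgment.
    claim_words = {w.lower().strip(".,[]") for w in claim.split()}
    evidence_words = {w.lower().strip(".,") for w in evidence.split()}
    return len(claim_words & evidence_words) >= 3

# Hypothetical paper text and source corpus for demonstration.
paper = "Transformers improve translation quality [1]. The sky is green [2]."
corpus = {
    "1": "We show transformers improve translation quality on WMT benchmarks.",
    "2": "This paper studies ocean acidification.",
}

results = {}
for claim in extract_claims(paper):
    evidence = retrieve_evidence(claim, corpus)
    results[claim] = verify(claim, evidence)
# The first claim is supported by its source; the second is not.
```

A real auditor would replace each stage with far stronger components (NLP-based claim segmentation, bibliographic-database retrieval, LLM entailment checking), but the pipeline shape is the same.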

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Provides a new tool to audit LLM-generated citations, improving the integrity of scientific literature.

RANK_REASON The cluster describes a new benchmark and framework for verifying scientific references in academic papers, which is a research contribution.


COVERAGE [1]

  1. arXiv cs.CL TIER_1 · Zhengqing Yuan, Kaiwen Shi, Zheyuan Zhang, Lichao Sun, Nitesh V. Chawla, Yanfang Ye

    CiteAudit: You Cited It, But Did You Read It? A Benchmark for Verifying Scientific References in the LLM Era

    arXiv:2602.23452v2 (announce type: replace). Abstract: Scientific research relies on accurate citation for attribution and integrity, yet large language models (LLMs) introduce a new risk: fabricated references that appear plausible but correspond to no real publications. Such hallu…