Researchers have developed a new defense framework called IntraGuard to combat the misuse of large language models (LLMs) in academic peer review. This system embeds hidden instructions within manuscripts that disrupt or alter reviews generated by AI, preventing reviewers from fully outsourcing their work to chatbots. IntraGuard operates by inserting heterogeneous defensive text objects into the PDF's structure without changing its visual appearance, achieving up to an 84% defense success rate across various venues and chatbot settings. AI
Summary written by gemini-2.5-flash-lite from 1 source. How we write summaries →
IMPACT Introduces a novel defense against AI-driven academic dishonesty, potentially preserving the integrity of peer review.
RANK_REASON The cluster contains an academic paper detailing a new defense mechanism against AI misuse in peer review. [lever_c_demoted from research: ic=1 ai=1.0]