Researchers have developed a new architecture using Trusted Execution Environments (TEEs) to make AI-assisted grant evaluations auditable without revealing the underlying models or scoring rubrics. This system allows external verifiers to confirm the specific model, rubric, and prompt template used, while also safeguarding against prompt injection risks by normalizing and sanitizing applicant documents. The proposed method creates a verifiable record of the evaluation process, enhancing accountability in public agencies considering LLMs for decision support.
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Enhances transparency and accountability in AI-driven public sector decision-making.
RANK_REASON Academic paper proposing a novel architecture for auditable AI evaluations.
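The verifiable-record idea described above can be sketched as hash commitments over the evaluation inputs: a verifier can later confirm which model, rubric, and prompt template were used without the record itself revealing their contents. This is a minimal illustrative sketch, not the paper's actual protocol; all function and field names are hypothetical, and real attestation would be produced inside the TEE.

```python
import hashlib

def commit(label: str, value: str) -> str:
    # Hash commitment: reveals nothing about the value, but can be
    # checked against a value disclosed (or attested) later.
    return hashlib.sha256(f"{label}:{value}".encode("utf-8")).hexdigest()

def audit_record(model_id: str, rubric: str, prompt_template: str,
                 applicant_doc: str) -> dict:
    """Hypothetical audit record committing to the evaluation inputs."""
    # Normalization stands in for the paper's sanitization pipeline,
    # which guards against prompt-injection payloads hidden in formatting.
    normalized_doc = " ".join(applicant_doc.split())
    return {
        "model_commit": commit("model", model_id),
        "rubric_commit": commit("rubric", rubric),
        "template_commit": commit("template", prompt_template),
        "doc_commit": commit("doc", normalized_doc),
    }

record = audit_record(
    model_id="example-model-v1",
    rubric="score 1-5 on feasibility and impact",
    prompt_template="Evaluate the proposal: {doc}",
    applicant_doc="Grant proposal\n\nwith   irregular whitespace",
)
```

Because the commitments are deterministic, two documents that normalize to the same text yield the same `doc_commit`, which is how a verifier can replay and check an evaluation after the fact.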