PulseAugur

Language models learn to abstain from answering when unsure, improving correctness

Researchers have developed a post-hoc framework called Conformal Abstention (CA) that helps language models decide when to abstain from answering a query. The method aims to reduce hallucinations by providing finite-sample guarantees on both when the model answers and whether those answers are correct. CA uses prediction confidence, calibrated against the geometry of the model's internal representations, to gauge how much relevant knowledge went into generating a response. Experiments show the approach significantly improves selective answering, reaching 75 percent conditional correctness.

Summary written by gemini-2.5-flash-lite from 2 sources.
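The abstention idea in the summary can be sketched as a split-conformal threshold on a confidence score: calibrate a cutoff on held-out examples so that, under an exchangeability assumption, only a bounded fraction of incorrectly answered calibration items would have cleared it. A minimal illustrative sketch (function names and the plain confidence score are assumptions for illustration; the paper's geometry-based calibration of that score is omitted):

```python
import numpy as np

def calibrate_threshold(cal_scores, cal_correct, alpha=0.25):
    """Pick a confidence threshold tau so that, with a finite-sample
    correction, at most roughly an alpha fraction of calibration
    examples the model got WRONG would score above tau.
    cal_scores: array of confidence scores; cal_correct: bool array."""
    # Nonconformity set: scores of calibration examples answered incorrectly.
    wrong_scores = np.sort(cal_scores[~cal_correct])
    n = len(wrong_scores)
    if n == 0:
        return -np.inf  # no observed errors: always answer
    # Conservative (1 - alpha) quantile with the (n + 1) finite-sample correction.
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return wrong_scores[k - 1]

def answer_or_abstain(score, tau):
    """Selective answering: respond only above the calibrated cutoff."""
    return "answer" if score > tau else "abstain"
```

This bounds how often low-knowledge (wrong-prone) queries slip past the threshold; it is a generic split-conformal construction, not the paper's exact guarantee.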

IMPACT Introduces a method to improve language model reliability by enabling them to admit ignorance, potentially reducing hallucinations and increasing trust in their outputs.

RANK_REASON This is a research paper introducing a new abstention framework for language models.

Read on arXiv cs.CL →

COVERAGE [2]

  1. arXiv cs.CL TIER_1 · Rui Xu, Yi Chen, Sihong Xie, Hui Xiong

    Geometry-Calibrated Conformal Abstention for Language Models

    arXiv:2604.27914v1 · Abstract: When language models lack relevant knowledge for a given query, they frequently generate plausible responses that can be hallucinations, rather than admitting being agnostic about the answer. Retraining models to reward admitting ignorance can lead to overly conservative behavior…

  2. arXiv cs.CL TIER_1 · Hui Xiong

    Geometry-Calibrated Conformal Abstention for Language Models

    When language models lack relevant knowledge for a given query, they frequently generate plausible responses that can be hallucinations, rather than admitting being agnostic about the answer. Retraining models to reward admitting ignorance can lead to overly conservative behavior…