
OpenAI's GPT-3 learns to express calibrated uncertainty in natural language

OpenAI has demonstrated a GPT-3 model that expresses uncertainty about its own answers in natural language, without relying on internal model probabilities. The model generates both an answer and a verbalized confidence level, such as "90% confidence"; these stated confidences are well calibrated and remain moderately calibrated even under shifts in data distribution. The research marks the first instance of a model verbally communicating calibrated uncertainty about its own responses, and it introduces a new evaluation suite called CalibratedMath.
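To make "well-calibrated" concrete: a model is calibrated if, among answers it labels "90% confidence", about 90% are actually correct. A common way to quantify this is expected calibration error (ECE). The sketch below is illustrative and not from the paper; the function name and toy data are assumptions.

```python
# Hypothetical sketch: measuring calibration of verbalized confidences
# with expected calibration error (ECE). Illustrative only; not the
# paper's actual evaluation code.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by stated confidence, then compare each bin's
    average stated confidence to its empirical accuracy."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece

# Toy data: answers the model tagged "90%" should be right ~9 times in 10.
stated = [0.9, 0.9, 0.9, 0.9, 0.9, 0.5, 0.5]
outcomes = [True, True, True, True, False, True, False]
print(round(expected_calibration_error(stated, outcomes), 3))
```

Lower ECE means the stated confidences track actual accuracy more closely; a perfectly calibrated model would score 0.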

Summary written by gemini-2.5-flash-lite from 1 source.

Rank reason: Academic paper detailing a new capability for language models.


Coverage (1 source):

  1. OpenAI News (Tier 1): "Teaching models to express their uncertainty in words"