Researchers have developed a new decoding framework called Context-Fidelity Boosting (CFB) to reduce hallucinations in large language models. CFB increases the generation probability of tokens that are supported by the input context, borrowing principles from watermarking techniques. The method requires no retraining, is compatible with a range of LLMs, and shows consistent faithfulness gains on tasks such as summarization and question answering with minimal overhead. The implementation of CFB has been released as open source.
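The core idea — biasing decoding toward context-supported tokens, much like watermarking biases toward a "green list" — can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's actual algorithm: the function names, the word-level token matching, and the bias value `delta` are all hypothetical.

```python
import math

def boost_context_tokens(logits, vocab, context, delta=2.0):
    # Hypothetical sketch: add a fixed bias `delta` to the logit of
    # every vocabulary token that appears in the input context,
    # analogous to the green-list bias used in watermarking.
    context_tokens = set(context.split())
    return [l + delta if tok in context_tokens else l
            for l, tok in zip(logits, vocab)]

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy example: the raw model slightly prefers an unsupported token.
vocab = ["paris", "london", "berlin"]
context = "the capital of france is paris"
logits = [1.0, 1.2, 0.8]

probs = softmax(boost_context_tokens(logits, vocab, context))
# After boosting, the context-supported token "paris" dominates.
```

Because the bias is applied only at decoding time, a scheme like this needs no retraining and can wrap any model that exposes next-token logits, which is consistent with the plug-and-play compatibility the summary describes.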
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Enhances LLM output reliability by reducing hallucinations, improving performance on tasks requiring factual adherence to context.
RANK_REASON Academic paper introducing a new method for improving LLM faithfulness.