PulseAugur

Extrinsic Hallucinations in LLMs

Lilian Weng's latest post examines extrinsic hallucinations in large language models, defining them as generated content that is fabricated and not grounded in the provided context or in world knowledge. The piece explores how issues in pre-training data and in the fine-tuning process can contribute to these factual inaccuracies. Research suggests that models struggle to acquire new knowledge during fine-tuning, and that pushing them to do so can paradoxically increase their tendency to hallucinate.

Summary written from 1 source.


Read on Lil'Log (Lilian Weng) →


Coverage (1 source)

  1. Lil'Log (Lilian Weng)

    Extrinsic Hallucinations in LLMs

    Hallucination in large language models usually refers to the model generating unfaithful, fabricated, inconsistent, or nonsensical content. As a term, hallucination has been somewhat generalized to cases when the model makes mistakes. Here, I would like to narrow down the prob…