
New MoRFI method identifies latent directions causing LLM hallucinations

Researchers have developed MoRFI (Monotonic Sparse Autoencoder Feature Identification) to better understand how large language models hallucinate. Fine-tuning models such as Llama 3.1 8B and Gemma 2 9B on new knowledge, they observed that prolonged training exacerbates hallucinations. MoRFI analyzes the models' internal states to identify specific directions in the residual stream that are causally linked to these factual inaccuracies, enabling targeted interventions to recover correct knowledge.
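The digest doesn't spell out MoRFI's actual procedure, but its core selection step — flagging sparse-autoencoder features whose activation rises monotonically as fine-tuning progresses — can be sketched. The snippet below is an illustration only: the checkpoint activation matrix, the Spearman-correlation criterion, and every name are assumptions, not the paper's code or API.

```python
# Rough sketch of selecting "monotonic" SAE features. The Spearman-based
# criterion and all names here are assumptions for illustration.
import numpy as np
from scipy.stats import spearmanr

def find_monotonic_features(checkpoint_acts: np.ndarray,
                            min_rho: float = 0.95) -> np.ndarray:
    """checkpoint_acts: (n_checkpoints, n_sae_features) mean feature
    activations on a fixed probe set, one row per fine-tuning checkpoint.
    Returns indices of features whose activation rises near-monotonically
    across checkpoints (hypothetical threshold min_rho)."""
    steps = np.arange(checkpoint_acts.shape[0])
    keep = []
    for j in range(checkpoint_acts.shape[1]):
        rho, _ = spearmanr(steps, checkpoint_acts[:, j])
        if rho >= min_rho:
            keep.append(j)
    return np.array(keep, dtype=int)

# Toy check: 6 checkpoints x 4 features; only feature 2 grows steadily.
rng = np.random.default_rng(0)
acts = rng.random((6, 4))
acts[:, 2] = np.linspace(0.1, 0.9, 6)
print(find_monotonic_features(acts))  # expected: [2]
```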

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Provides a method to diagnose and potentially mitigate hallucinations in LLMs by identifying specific internal knowledge retrieval pathways.
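How the "targeted interventions" work is likewise not described in this digest. A common pattern in the interpretability literature is to project an identified direction out of the residual stream at inference time; the hook below is a hedged sketch in that style, assuming a decoder direction taken from the SAE. The layer choice, hook mechanics, and placeholder names are assumptions, not the paper's method.

```python
# Hypothetical residual-stream ablation in the style of common
# interpretability tooling; not the paper's actual intervention code.
import torch

def make_ablation_hook(direction: torch.Tensor):
    """Forward hook that projects `direction` out of a block's output."""
    d = direction / direction.norm()
    def hook(module, inputs, output):
        resid = output[0] if isinstance(output, tuple) else output
        # Remove the component along the hallucination-linked direction.
        resid = resid - (resid @ d).unsqueeze(-1) * d
        return (resid, *output[1:]) if isinstance(output, tuple) else resid
    return hook

# Usage (placeholder names): attach at the layer where the monotonic
# SAE feature lives, then generate as usual.
# layer = model.model.layers[20]   # hypothetical layer index
# handle = layer.register_forward_hook(make_ablation_hook(sae_decoder_dir))
# ...run generation...
# handle.remove()
```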

RANK_REASON Academic paper introducing a new method for analyzing LLM behavior.

Read on arXiv cs.CL →

COVERAGE [2]

  1. arXiv cs.CL TIER_1 · Dimitris Dimakopoulos, Shay B. Cohen, Ioannis Konstas

    MoRFI: Monotonic Sparse Autoencoder Feature Identification

    arXiv:2604.26866v1 · Abstract: Large language models (LLMs) acquire most of their factual knowledge during the pre-training stage, through next token prediction. Subsequent stages of post-training often introduce new facts outwith the parametric knowledge, giving…

  2. arXiv cs.CL TIER_1 · Ioannis Konstas

    MoRFI: Monotonic Sparse Autoencoder Feature Identification

    Large language models (LLMs) acquire most of their factual knowledge during the pre-training stage, through next token prediction. Subsequent stages of post-training often introduce new facts outwith the parametric knowledge, giving rise to hallucinations. While it has been demon…