A new research paper demonstrates that providing in-context examples to large language models can inadvertently suppress their ability to recall and apply scientific knowledge. The study found that even when the examples are derived from the same underlying scientific formulas, models shift their computation toward empirical pattern fitting rather than knowledge-driven derivation. The phenomenon was observed across 60 tasks spanning five scientific domains and four different models, a cautionary finding for practitioners deploying LLMs in scientific applications.
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT In-context examples may hinder LLMs' recall of scientific knowledge, shifting computation toward pattern fitting instead of derivation.
RANK_REASON Academic paper detailing a novel finding about LLM behavior.