Researchers have identified "lexical task heads" within large language models that appear to represent the task itself, regardless of how the prompt is phrased. These heads are shared across different prompting styles, such as instruction-based and example-based prompts. The study suggests that variations in model behavior and performance can be explained by the activation levels of these heads, with competing representations sometimes leading to errors.
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Provides insight into LLM prompt sensitivity and internal mechanisms, potentially supporting more robust prompt engineering.
RANK_REASON Academic paper detailing findings on LLM internal representations.