This paper proposes a cognitive-semantic framework for understanding how prompts influence large language model behavior. It introduces concepts such as frame activation, salience control, and construal selection to explain how prompts act as semantic conditions that guide model interpretation and task structuring. The research demonstrates that prompts can alter model judgments, evidence usage, and answer organization in tasks such as natural language inference and question answering, suggesting a shift from viewing prompting solely as a performance-enhancement technique to analyzing its semantic control capabilities.
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Provides a theoretical lens for understanding and potentially improving prompt engineering techniques for LLMs.
RANK_REASON This is a research paper published on arXiv detailing a new theoretical framework for prompt engineering.