A new position paper introduces the Theory of Agent (ToA) framework, proposing that AI agents should use external tools only when it is epistemically necessary, that is, only when the task cannot be reliably completed with the agent's internal reasoning and current context alone. The paper argues that common agent failures, such as overthinking or excessive delegation, stem from misjudgments about uncertainty rather than from inherent reasoning flaws, and that adhering to this principle is key to building more intelligent and efficient agents.
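To make the decision rule concrete, here is a minimal Python sketch of such an epistemic-necessity gate. The function names, the substring-based confidence heuristic, and the threshold are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of an "epistemic necessity" gate for tool use (illustrative only;
# all names, the heuristic, and the threshold are assumptions, not from the paper).

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for "reliably completable internally"

def estimate_confidence(task: str, context: str) -> float:
    """Placeholder self-assessment: estimated probability that internal
    reasoning over the current context suffices for this task."""
    return 1.0 if task.lower() in context.lower() else 0.5

def answer_internally(task: str, context: str) -> str:
    """Placeholder for answering from internal reasoning and context alone."""
    return f"internal answer to: {task}"

def call_external_tool(task: str) -> str:
    """Placeholder for delegating to an external tool (search, code execution, etc.)."""
    return f"tool result for: {task}"

def act(task: str, context: str) -> str:
    # Delegate only when confidence in an internal answer falls below the
    # threshold, i.e. when external evidence is epistemically necessary.
    if estimate_confidence(task, context) >= CONFIDENCE_THRESHOLD:
        return answer_internally(task, context)
    return call_external_tool(task)
```

The substring heuristic and fixed threshold stand in for whatever calibrated uncertainty estimate an agent actually has; the point is only that tool use is gated on that estimate rather than invoked by default.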
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Proposes a new framework for agent decision-making, potentially improving efficiency and intelligence by limiting unnecessary external tool use.
RANK_REASON This is a research paper published on arXiv proposing a new theoretical framework for AI agent behavior.