A developer documented two months of extensive use of Anthropic's Claude Code and OpenAI's Codex, consuming over 4 billion tokens in total. April accounted for the bulk of this, with 3.77 billion tokens used, driven largely by parallel agent execution and the adoption of newer models such as Opus-4-7 and GPT-5.5. The analysis revealed a heavy reliance on caching in both tools: Claude Code leaned primarily on cache reads, while Codex relied on cached inputs, which kept costs manageable despite the higher per-token prices of the newer models.
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Highlights the practical implications of large-scale AI coding-tool usage and the cost savings that caching mechanisms offer developers.
RANK_REASON The article details one user's experience and usage patterns with existing AI coding tools, rather than a new release or a significant industry event.