Ollama has released version 0.23.2, introducing several key changes. The "ollama launch" command now excludes Claude Desktop by default, with a specific flag required to restore it, a change attributed to Anthropic's model limitations. Performance improvements include caching of "/api/show" responses, yielding a significant latency reduction for integrations such as VS Code.
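For context, "/api/show" is the endpoint that editor integrations such as VS Code poll for model metadata, which is why caching its responses matters. A minimal sketch of such a request, assuming a local Ollama server on the default port 11434 and a pulled model named "llama3.2" (both assumptions, not details from the release notes):

```python
import requests

# Ask Ollama for metadata about a local model via /api/show.
# Per the release summary, responses to this endpoint are now cached,
# so repeated polling by tools like VS Code should return faster.
resp = requests.post(
    "http://localhost:11434/api/show",   # default Ollama address (assumed)
    json={"model": "llama3.2"},          # hypothetical model name
    timeout=10,
)
resp.raise_for_status()
info = resp.json()

# A couple of the fields typically present in a /api/show response.
print(info.get("details", {}))
print(list(info.get("model_info", {}).keys())[:5])
```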
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Improves the performance and user experience of local LLM deployment tools.
RANK_REASON This is a software release for a tool that facilitates running LLMs locally, not a new model release from a frontier lab.