Ollama - Package Updates
-
[1.9.1]
- Update ollama to 0.21.1
- MLX runner adds logprobs support for compatible models
- Faster MLX sampling with fused top-P and top-K in a single sort pass, plus repeat penalties applied in the sampler
- Improved MLX prompt tokenization by moving tokenization into request handler goroutines
- Better MLX thread safety for array management
- GLM4 MoE Lite performance improvement with a fused sigmoid router head
- Fixed model picker showing stale model after switching chats in the macOS app
- Fixed structured outputs for Gemma 4 when think=false
- Full Changelog: https://github.com/ollama/ollama/compare/v0.21.0...v0.21.1
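
The structured-outputs fix above can be illustrated with a minimal request sketch against Ollama's /api/chat endpoint, using the documented "format" (JSON schema) and "think" request fields; the model tag and schema here are illustrative, not taken from the release notes:

```python
import json

# Illustrative /api/chat request body: "format" carries a JSON schema
# constraining the reply, and "think": false disables thinking.
payload = {
    "model": "gemma4",  # hypothetical model tag, for illustration only
    "messages": [{"role": "user", "content": "Name two primary colors."}],
    "think": False,
    "format": {
        "type": "object",
        "properties": {
            "colors": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["colors"],
    },
    "stream": False,
}

# POST this body to http://localhost:11434/api/chat on a running server.
body = json.dumps(payload)
```

With the fix, the schema in "format" is honored even when thinking is disabled.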
-
[1.9.2]
- Update ollama to 0.21.2
- Full Changelog
- Improved reliability of the OpenClaw onboarding flow in ollama launch
- Recommended models in ollama launch now appear in a fixed, canonical order
- The OpenClaw integration now bundles Ollama's web search plugin
-
[1.10.0]
- Update ollama to 0.22.0
- NVIDIA's Nemotron 3 Omni
- Laguna XS.2, Poolside's first open-weight coding model
- Full Changelog: https://github.com/ollama/ollama/compare/v0.21.2...v0.22.0
-
[1.10.1]
- Update ollama to 0.22.1
- Updated the Gemma 4 renderer for thinking and tool calling improvements
- Model recommendations are now updated without updating Ollama
- Aligned the desktop app's launch page with ollama launch integrations
- Fixed the Poolside integration title in ollama launch
- Full Changelog: https://github.com/ollama/ollama/compare/v0.22.0...v0.22.1
-
[1.11.0]
- Update ollama to 0.23.0
- Full Changelog
- Launch Claude Desktop with ollama launch claude-desktop
- The Ollama app now surfaces featured models from server-driven recommendations
- Fixed OpenClaw gateway timeout on Windows by enforcing IPv4 loopback (thanks @UniquePratham)
- Hardened Metal initialization to gracefully handle ggml kernel compilation failures
-
[1.11.1]
- Update ollama to 0.23.1
- Full Changelog
- Update MLX and MLX-C with threading fixes by @dhiltgen in #15845
- go: bump to 1.26 by @ParthSareen in #15904
- Add Gemma 4 MTP speculative decoding by @pdevine in #15980
-
[1.11.2]
- Update ollama to 0.23.2
- ollama launch no longer includes Claude Desktop because the third-party integration is limited to Anthropic models. Use ollama launch claude-desktop --restore to restore Claude Desktop to its normal state.
- /api/show responses are now cached, improving median latency by ~6.7x, which speeds up loading for integrations like VS Code.
- Improved backup workflow when managing launch integrations
- Cleaner image generation layout in the MLX runner
- Full Changelog: https://github.com/ollama/ollama/compare/v0.23.1...v0.23.2
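
The newly cached endpoint can be exercised directly. A minimal stdlib sketch that builds (but does not send) a request, assuming the documented /api/show body shape ({"model": ...}) and the default local port:

```python
import json
import urllib.request

def show_request(model: str, host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a POST request for Ollama's /api/show endpoint."""
    data = json.dumps({"model": model}).encode("utf-8")
    return urllib.request.Request(
        f"{host}/api/show",
        data=data,
        headers={"Content-Type": "application/json"},
    )

req = show_request("llama3.2")
# With a server running: json.load(urllib.request.urlopen(req))
# returns the model's metadata (template, parameters, capabilities, ...).
```

Integrations that call /api/show on startup, such as editor plugins, benefit directly from the server-side caching.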
-
[1.11.3]
- Update ollama to 0.23.3
- Full Changelog
- mlx: refined model push behavior by @dhiltgen in #15431
- app: harden update flows by @dhiltgen in #16100
- mlx: update the imagegen runner for mlx thread affinity by @pdevine in #16096
- mlx: avoid status timeout during inference by @dhiltgen in #16086
- mlx: fix macOS 26 target leakage in v3 metallib by @dhiltgen in #16053
-
[1.11.4]
- Update ollama to 0.23.4
- Full Changelog
- ollama launch opencode now supports vision models with image inputs
- Fixed formatting of Claude tool results when using local image paths
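
Image inputs to a vision model ride along as base64-encoded bytes on the chat message. A minimal sketch of the /api/chat payload shape, with an illustrative model tag and placeholder bytes standing in for a real image file:

```python
import base64
import json

def vision_message(prompt: str, image_bytes: bytes) -> dict:
    """A user message carrying one base64-encoded image, as /api/chat expects."""
    return {
        "role": "user",
        "content": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    }

# Placeholder bytes stand in for a real image file's contents.
msg = vision_message("Describe this image.", b"placeholder-image-bytes")
payload = {"model": "llava", "messages": [msg], "stream": False}  # illustrative tag
body = json.dumps(payload)
```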
-
[1.12.0]
- Update ollama to 0.24.0
- The OpenAI Codex App is now available on Ollama. Use any Ollama model, local or cloud, inside the desktop app to code, browse, and review.
- Codex can spin up local servers and sites in its built-in browser. Annotate directly on the page to request changes.
- Review code inside the app, leave comments, and iterate without leaving your workspace.
- Reworked the MLX sampler for improved generation quality on Apple Silicon
- Full Changelog: https://github.com/ollama/ollama/compare/v0.23.0...v0.24.0