Ollama - Package Updates
[1.3.0]
- Update ollama to 0.15.0
- Full Changelog
- A new `ollama launch` command to use Ollama's models with Claude Code, Codex, OpenCode, and Droid without separate configuration
- Fixed issue where creating multi-line strings with `"""` would not work when using `ollama run`
- <kbd>Ctrl</kbd>+<kbd>J</kbd> and <kbd>Shift</kbd>+<kbd>Enter</kbd> now work for inserting newlines in `ollama run`
- Reduced memory usage for GLM-4.7-Flash models
[1.3.1]
- Update ollama to 0.15.1
- Full Changelog
- GLM-4.7-Flash performance and correctness improvements, fixing repetitive answers and improving tool calling quality
- Fixed performance issues on macOS and arm64 Linux
- Fixed issue where `ollama launch` would not detect `claude` and would incorrectly update `opencode` configurations
[1.3.2]
- Update ollama to 0.15.2
- Full Changelog
- New `ollama launch clawdbot` command for launching Clawdbot using Ollama models
[1.3.3]
- Update ollama to 0.15.4
- Full Changelog
- Renamed `ollama launch clawdbot` to `ollama launch openclaw` to reflect the project's new name
- `ollama launch openclaw` will now enter the standard OpenClaw onboarding flow if it has not yet been completed
- Improved tool calling for Ministral models
- `ollama launch` will now respect the value of `OLLAMA_HOST`
[1.3.4]
- Update ollama to 0.15.5
- Full Changelog
- Improvements to `ollama launch`
- Sub-agent support for `ollama launch` for planning, deep research, and similar tasks
- `ollama signin` will now open the browser to the connect page, making signing in easier
- Ollama will now default to context lengths based on available VRAM
- GLM-4.7-Flash support on Ollama's experimental MLX engine
- Fixed off-by-one error when using `num_predict` in the API
- Fixed issue where tokens from a previous sequence would be returned when hitting `num_predict`
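The two `num_predict` fixes concern the token cap passed in the generate API's `options` object. A minimal sketch of a request body that uses it, assuming the default local endpoint and an illustrative model name:

```python
import json

# Request payload for Ollama's /api/generate endpoint. `num_predict`
# caps how many tokens the model may generate for this request.
payload = {
    "model": "glm-4.7-flash",          # illustrative model name
    "prompt": "Why is the sky blue?",
    "stream": False,                    # return one JSON object, not a stream
    "options": {"num_predict": 128},    # generate at most 128 tokens
}

body = json.dumps(payload).encode("utf-8")

# To send it (requires a server running at the default localhost:11434):
#   import urllib.request
#   req = urllib.request.Request("http://localhost:11434/api/generate",
#                                data=body,
#                                headers={"Content-Type": "application/json"})
#   resp = json.load(urllib.request.urlopen(req))
```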
[1.3.5]
- Update ollama to 0.15.6
- Full Changelog
- Fixed context limits when running `ollama launch droid`
- `ollama launch` will now download missing models instead of erroring
- Fixed bug where `ollama launch claude` would cause context compaction when providing images
[1.4.0]
- Update ollama to 0.16.1
- Full Changelog
- Installing Ollama via the `curl` install script on macOS will now only prompt for your password if it's required
- Installing Ollama via the `iem` install script on Windows will now show progress
- Image generation models will now respect the `OLLAMA_LOAD_TIMEOUT` variable
- GLM-5: a strong reasoning and agentic model from Z.ai with 744B total parameters (40B active), built for complex systems engineering and long-horizon tasks
- MiniMax-M2.5: a new state-of-the-art large language model designed for real-world productivity and coding tasks.
- The new `ollama` command makes it easy to launch your favorite apps with models using Ollama
- Launch Pi with `ollama launch pi`
ollama launch pi - Improvements to Ollama's MLX runner to support GLM-4.7-Flash
- <kbd>Ctrl</kbd>+<kbd>G</kbd> now allows editing prompts in a text editor when running a model
[1.4.1]
- Update ollama to 0.16.2
- Full Changelog
- `ollama launch claude` now supports searching the web when using `:cloud` models
- Fixed rendering issue when running `ollama` in PowerShell
- New setting in Ollama's app makes it easier to disable cloud models for sensitive and private tasks where data cannot leave your computer. For Linux or when running `ollama serve` manually, set `OLLAMA_NO_CLOUD=1`.
- Fixed issue where experimental image generation models would not run in 0.16.0 and 0.16.1
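When starting the server by hand, the cloud-model opt-out is just an environment variable. A small sketch of what setting it looks like; the actual launch is commented out since it requires the `ollama` binary:

```python
import os

# Disable cloud models for a manually started server (Linux, or any
# platform where `ollama serve` is run by hand): with OLLAMA_NO_CLOUD=1
# set, inference stays on the local machine.
env = dict(os.environ, OLLAMA_NO_CLOUD="1")

# To actually start the server (requires ollama on PATH):
#   import subprocess
#   subprocess.run(["ollama", "serve"], env=env)
```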
[1.4.2]
- Update ollama to 0.16.3
- Full Changelog
- New `ollama launch cline` added for the Cline CLI
- `ollama launch <integration>` will now always show the model picker
- Added Gemma 3, Llama, and Qwen 3 architectures to the MLX runner
[1.5.0]
- Update ollama to 0.17.0
- Full Changelog
- OpenClaw can now be installed and configured automatically via Ollama, making it the easiest way to get up and running with open models like Kimi-K2.5, GLM-5, and MiniMax-M2.5
- When using cloud models, web search is enabled, allowing OpenClaw to search the internet
- Improved tokenizer performance
- Ollama's macOS and Windows apps will now default to a context length based on available VRAM