Ollama - Package Updates
[1.2.1]
- Update ollama to 0.14.1
- Full Changelog
- Fixed macOS auto-update signature verification failure
[1.2.3]
- Update ollama to 0.14.3
- Full Changelog
- Z-Image Turbo: A 6-billion-parameter text-to-image model from Alibaba's Tongyi Lab that generates high-quality photorealistic images.
- Flux.2 Klein: Black Forest Labs' fastest image-generation model to date.
- Fixed issue where Ollama's macOS app would interrupt system shutdown
- Fixed `ollama create` and `ollama show` commands for experimental models
- The `/api/generate` API can now be used for image generation
- Fixed minor issues in Nemotron-3-Nano tool parsing
- Fixed issue where removing an image generation model would cause it to first load
- Fixed issue where `ollama rm` would only stop the first model in the list if it were running
[1.3.0]
- Update ollama to 0.15.0
- Full Changelog
- A new `ollama launch` command to use Ollama's models with Claude Code, Codex, OpenCode, and Droid without separate configuration
- Fixed issue where creating multi-line strings with `"""` would not work when using `ollama run`
- <kbd>Ctrl</kbd>+<kbd>J</kbd> and <kbd>Shift</kbd>+<kbd>Enter</kbd> now work for inserting newlines in `ollama run`
- Reduced memory usage for GLM-4.7-Flash models
[1.3.1]
- Update ollama to 0.15.1
- Full Changelog
- GLM-4.7-Flash performance and correctness improvements, fixing repetitive answers and tool calling quality
- Fixed performance issues on macOS and arm64 Linux
- Fixed issue where `ollama launch` would not detect `claude` and would incorrectly update `opencode` configurations
[1.3.2]
- Update ollama to 0.15.2
- Full Changelog
- New `ollama launch clawdbot` command for launching Clawdbot using Ollama models
[1.3.3]
- Update ollama to 0.15.4
- Full Changelog
- Renamed `ollama launch clawdbot` to `ollama launch openclaw` to reflect the project's new name
- `ollama launch openclaw` will now enter the standard OpenClaw onboarding flow if it has not yet been completed
- Improved tool calling for Ministral models
- `ollama launch` will now use the value of `OLLAMA_HOST` when running
[1.3.4]
- Update ollama to 0.15.5
- Full Changelog
- Improvements to `ollama launch`
- Sub-agent support for `ollama launch` for planning, deep research, and similar tasks
- `ollama signin` will now open a browser window to the connect page, making signing in easier
- Ollama will now choose default context lengths based on available VRAM
- GLM-4.7-Flash support on Ollama's experimental MLX engine
- Fixed off-by-one error when using `num_predict` in the API
- Fixed issue where tokens from a previous sequence would be returned when hitting `num_predict`
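The two `num_predict` fixes above concern the generation-length cap in Ollama's REST API. As a minimal sketch, this is roughly what a `/api/generate` request body that sets the cap looks like; the model name `llama3` is a placeholder, not taken from this changelog:

```python
import json

# Hedged sketch: a /api/generate request body that caps output length
# with the num_predict option. "llama3" is a placeholder model name.
request = {
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "stream": False,
    # Generation stops after at most 64 tokens; the fixes above make the
    # cap exact and prevent stale tokens from a previous sequence leaking in.
    "options": {"num_predict": 64},
}

payload = json.dumps(request)
print(payload)
```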
[1.3.5]
- Update ollama to 0.15.6
- Full Changelog
- Fixed context limits when running `ollama launch droid`
- `ollama launch` will now download missing models instead of erroring
- Fixed bug where `ollama launch claude` would cause context compaction when providing images
[1.4.0]
- Update ollama to 0.16.1
- Full Changelog
- Installing Ollama via the `curl` install script on macOS will now only prompt for your password if it's required
- Installing Ollama via the `iem` install script on Windows will now show progress
- Image generation models will now respect the `OLLAMA_LOAD_TIMEOUT` variable
- GLM-5: A strong reasoning and agentic model from Z.ai with 744B total parameters (40B active), built for complex systems engineering and long-horizon tasks.
- MiniMax-M2.5: A new state-of-the-art large language model designed for real-world productivity and coding tasks.
- The new `ollama` command makes it easy to launch your favorite apps with models using Ollama
- Launch Pi with `ollama launch pi`
- Improvements to Ollama's MLX runner to support GLM-4.7-Flash
- <kbd>Ctrl</kbd>+<kbd>G</kbd> will now allow for editing text prompts in a text editor when running a model
[1.4.1]
- Update ollama to 0.16.2
- Full Changelog
- `ollama launch claude` now supports searching the web when using `:cloud` models
- Fixed rendering issue when running `ollama` in PowerShell
- New setting in Ollama's app makes it easier to disable cloud models for sensitive and private tasks where data cannot leave your computer. For Linux, or when running `ollama serve` manually, set `OLLAMA_NO_CLOUD=1`.
- Fixed issue where experimental image generation models would not run in 0.16.0 and 0.16.1
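For headless setups, the cloud-model opt-out above is applied as an environment variable. A minimal configuration sketch, assuming a manually started server:

```shell
# Hedged sketch: keep all inference local by disabling cloud models,
# per the OLLAMA_NO_CLOUD setting described in this release.
export OLLAMA_NO_CLOUD=1
ollama serve
```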