Ollama - Package Updates
-
[1.1.2]
- Increase the proxy read timeout to 1h
-
[1.1.3]
- Disable body size check within the app
-
[1.1.4]
- Update ollama to 0.13.3
-
[1.1.5]
- Update ollama to 0.13.4
- Full Changelog
- Nemotron 3 Nano: A New Standard for Efficient, Open, and Intelligent Agentic Models
- Olmo 3 and Olmo 3.1: A series of open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.
- Enable Flash Attention automatically for models by default
- Fixed handling of long contexts with Gemma 3 models
- Fixed issue that would occur with Gemma 3 QAT models or other models imported with the Gemma 3 architecture
-
[1.1.6]
- Update ollama to 0.13.5
- Full Changelog
- Google's FunctionGemma is now available on Ollama
- BERT architecture models now run on Ollama's engine
- Added built-in renderer & tool parsing capabilities for DeepSeek-V3.1
- Fixed issue where nested properties in tools may not have been rendered properly
-
[1.2.0]
- Update ollama to 0.14.0
- Full Changelog
- Running ollama run --experimental will now open a new Ollama CLI that includes an agent loop and the bash tool
- Anthropic API compatibility: support for the /v1/messages API
- A new REQUIRES command for the Modelfile allows declaring which version of Ollama is required for the model
- For older models, Ollama will avoid an integer underflow on low VRAM systems during memory estimation
- More accurate VRAM measurements for AMD iGPUs
- Ollama's app will now highlight Swift source code
- An error is now returned when embeddings contain NaN or -Inf values
- Ollama's Linux install bundles now use zstd compression
- New experimental support for image generation models, powered by MLX
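Since this release adds support for the Anthropic-style /v1/messages API, a request that worked against Anthropic's Messages API can be pointed at a local Ollama server instead. Below is a minimal sketch of the request payload; the endpoint path comes from the changelog, while the model name ("llama3.2") and the default port (11434) are assumptions, so substitute whatever model you have pulled locally.

```python
import json

# Anthropic-compatible endpoint on a local Ollama server
# (port 11434 is Ollama's default; adjust if yours differs).
OLLAMA_MESSAGES_URL = "http://localhost:11434/v1/messages"

# Same request shape as Anthropic's Messages API.
payload = {
    "model": "llama3.2",   # assumed local model name, not from the changelog
    "max_tokens": 256,     # required field in the Messages API schema
    "messages": [
        {"role": "user", "content": "Why is the sky blue?"},
    ],
}

# Serialize to the JSON body you would POST with
# header "content-type: application/json" (e.g. via urllib.request
# or an Anthropic SDK configured with the base URL above).
body = json.dumps(payload)
```

This only builds the request body; sending it requires a running Ollama instance with the chosen model pulled.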
-
[1.2.1]
- Update ollama to 0.14.1
- Full Changelog
- Fixed macOS auto-update signature verification failure
-
[1.2.3]
- Update ollama to 0.14.3
- Full Changelog
- Z-Image Turbo: 6 billion parameter text-to-image model from Alibaba's Tongyi Lab. It generates high-quality photorealistic images.
- Flux.2 Klein: Black Forest Labs' fastest image-generation models to date.
- Fixed issue where Ollama's macOS app would interrupt system shutdown
- Fixed ollama create and ollama show commands for experimental models
- The /api/generate API can now be used for image generation
- Fixed minor issues in Nemotron-3-Nano tool parsing
- Fixed issue where removing an image generation model would cause it to first load
- Fixed issue where ollama rm would only stop the first model in the list if it were running
-
[1.3.0]
- Update ollama to 0.15.0
- Full Changelog
- A new ollama launch command to use Ollama's models with Claude Code, Codex, OpenCode, and Droid without separate configuration
- Fixed issue where creating multi-line strings with """ would not work when using ollama run
- <kbd>Ctrl</kbd>+<kbd>J</kbd> and <kbd>Shift</kbd>+<kbd>Enter</kbd> now work for inserting newlines in ollama run
- Reduced memory usage for GLM-4.7-Flash models
-
[1.3.1]
- Update ollama to 0.15.1
- Full Changelog
- GLM-4.7-Flash performance and correctness improvements, fixing repetitive answers and tool calling quality
- Fixed performance issues on macOS and arm64 Linux
- Fixed issue where ollama launch would not detect claude and would incorrectly update opencode configurations