Cloudron Forum


Ollama - Package Updates

32 Posts · 2 Posters · 3.8k Views · 3 Watching
Package Updates (#23)

    [1.3.2]

    • Update ollama to 0.15.2
    • Full Changelog
    • New ollama launch clawdbot command for launching Clawdbot using Ollama models
Package Updates (#24)

      [1.3.3]

      • Update ollama to 0.15.4
      • Full Changelog
      • ollama launch openclaw will now enter the standard OpenClaw onboarding flow if this has not yet been completed.
      • Renamed ollama launch clawdbot to ollama launch openclaw to reflect the project's new name
      • Improved tool calling for Ministral models
• ollama launch will now use the value of OLLAMA_HOST when running
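
Since ollama launch now honors OLLAMA_HOST, pointing it at a remote instance (such as a Cloudron-hosted one) should just be a matter of setting the variable first. A minimal sketch; the hostname is a placeholder:

```shell
# Point the Ollama CLI at a remote server instead of the default
# localhost:11434 (hostname below is a placeholder).
export OLLAMA_HOST=https://ollama.example.com

# ollama launch now uses OLLAMA_HOST, so the launched app talks
# to the remote instance.
ollama launch openclaw
```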
Package Updates (#25)

        [1.3.4]

        • Update ollama to 0.15.5
        • Full Changelog
        • Improvements to ollama launch
        • Sub-agent support for ollama launch for planning, deep research, and similar tasks
        • ollama signin will now open a browser window to make signing in easier
        • Ollama will now default to a context length based on available VRAM
        • GLM-4.7-Flash support on Ollama's experimental MLX engine
        • ollama signin will now open the browser to the connect page
        • Fixed an off-by-one error when using num_predict in the API
        • Fixed an issue where tokens from a previous sequence would be returned when hitting num_predict
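
For context, num_predict is the token cap passed in the options object of the generate API, which is where the off-by-one showed up. A minimal request against a local server (the model name is just an example, any pulled model works):

```shell
# Cap generation at 16 tokens via options.num_predict.
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3.5:2b",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": { "num_predict": 16 }
}'
```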
Package Updates (#26)

          [1.3.5]

          • Update ollama to 0.15.6
          • Full Changelog
          • Fixed context limits when running ollama launch droid
          • ollama launch will now download missing models instead of erroring
          • Fixed bug where ollama launch claude would cause context compaction when providing images
Package Updates (#27)

            [1.4.0]

            • Update ollama to 0.16.1
            • Full Changelog
            • Installing Ollama via the curl install script on macOS will now only prompt for your password if it's required
            • Installing Ollama via the install script on Windows will now show progress
            • Image generation models will now respect the OLLAMA_LOAD_TIMEOUT variable
            • GLM-5: A strong reasoning and agentic model from Z.ai with 744B total parameters (40B active), built for complex systems engineering and long-horizon tasks.
            • MiniMax-M2.5: a new state-of-the-art large language model designed for real-world productivity and coding tasks.
            • The new ollama command makes it easy to launch your favorite apps with models using Ollama
            • Launch Pi with ollama launch pi
            • Improvements to Ollama's MLX runner to support GLM-4.7-Flash
            • Ctrl+G will now allow for editing text prompts in a text editor when running a model
Package Updates (#28)

              [1.4.1]

              • Update ollama to 0.16.2
              • Full Changelog
              • ollama launch claude now supports searching the web when using :cloud models
              • Fixed rendering issue when running ollama in PowerShell
              • New setting in Ollama's app makes it easier to disable cloud models for sensitive and private tasks where data cannot leave your computer. For Linux or when running ollama serve manually, set OLLAMA_NO_CLOUD=1.
              • Fixed issue where experimental image generation models would not run in 0.16.0 and 0.16.1
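
For a Cloudron install, where the server runs as a service rather than through the desktop app's new setting, the release notes above suggest the opt-out is the environment variable. A sketch for a manual run:

```shell
# Keep all inference local: disable cloud models for a manually
# started server (Linux / ollama serve, per the notes above).
export OLLAMA_NO_CLOUD=1
ollama serve
```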
Package Updates (#29)

                [1.4.2]

                • Update ollama to 0.16.3
                • Full Changelog
                • New ollama launch cline added for the Cline CLI
                • ollama launch <integration> will now always show the model picker
                • Added Gemma 3, Llama and Qwen 3 architectures to MLX runner
Package Updates (#30)

                  [1.5.0]

                  • Update ollama to 0.17.0
                  • Full Changelog
                  • OpenClaw can now be installed and configured automatically via Ollama, making it the easiest way to get up and running with OpenClaw using open models like Kimi-K2.5, GLM-5, and MiniMax-M2.5.
                  • When using cloud models, web search is enabled, allowing OpenClaw to search the internet.
                  • Improved tokenizer performance
                  • Ollama's macOS and Windows apps will now default to a context length based on available VRAM
Package Updates (#31)

                    [1.5.1]

                    • Update ollama to 0.17.4
                    • Full Changelog
                    • Tool call indices will now be included in parallel tool calls
                    • Fixed issue where tool calls in the Qwen 3 and Qwen 3.5 model families would not be parsed correctly if emitted during thinking
                    • Fixed issue where Ollama's app on Windows would crash when a new update has been downloaded
                    • Nemotron architecture support in Ollama's engine
                    • MLX engine now has improved memory usage
                    • Ollama's app will now allow models that support tools to use web search capabilities
                    • Improved LFM2 and LFM2.5 models in Ollama's engine
                    • ollama create will no longer default to affine quantization for unquantized models when using the MLX engine
                    • Added configuration for disabling automatic update downloading
Package Updates (#32)

                      [1.5.2]

                      • Update ollama to 0.17.5
                      • Full Changelog
                      • Qwen3.5: the small Qwen 3.5 model series is now available in 0.8B, 2B, 4B and 9B parameter sizes.
                      • Fixed crash in Qwen 3.5 models when split over GPU & CPU
                      • Fixed an issue where Qwen 3.5 models would repeat themselves because no presence penalty was applied (note: you may have to redownload the Qwen 3.5 models, e.g. ollama pull qwen3.5:35b)
                      • ollama run --verbose will now show peak memory usage when using Ollama's MLX engine
                      • Fixed memory issues and crashes in MLX runner
                      • Fixed issue where Ollama would not be able to run models imported from Qwen3.5 GGUF files
