Ollama - Package Updates

Package Updates #18

[1.2.0]

• Update ollama to 0.14.0
• Full Changelog
• ollama run --experimental will now open a new Ollama CLI that includes an agent loop and the bash tool
• Anthropic API compatibility: support for the /v1/messages API (see the curl sketch after this list)
• A new REQUIRES command for the Modelfile allows declaring which version of Ollama is required for the model (see the Modelfile sketch after this list)
• For older models, Ollama will avoid an integer underflow on low-VRAM systems during memory estimation
• More accurate VRAM measurements for AMD iGPUs
• Ollama's app will now highlight Swift source code
• An error will now be returned when embeddings contain NaN or -Inf values
• Ollama's Linux install bundle files now use zstd compression
• New experimental support for image generation models, powered by MLX
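The /v1/messages support means an Anthropic-style client can point at the local server. A minimal sketch, assuming the default local port 11434, a placeholder model tag, and that the request body mirrors the Anthropic Messages shape (none of these specifics are confirmed in the post):

    # Hypothetical request against a local Ollama instance; "qwen3" is a placeholder model tag.
    curl http://localhost:11434/v1/messages \
      -H "Content-Type: application/json" \
      -d '{
        "model": "qwen3",
        "max_tokens": 256,
        "messages": [
          {"role": "user", "content": "Say hello in one sentence."}
        ]
      }'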
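The REQUIRES directive would then sit alongside the usual Modelfile commands. A minimal sketch, assuming it takes a bare version string; the exact syntax is an assumption, not taken from the release notes:

    # Hypothetical Modelfile; FROM and SYSTEM are long-standing directives, the REQUIRES syntax shown is assumed.
    cat > Modelfile <<'EOF'
    FROM llama3.2
    REQUIRES 0.14.0
    SYSTEM You are a terse assistant.
    EOF
    ollama create my-model -f Modelfile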
Package Updates #19

[1.2.1]

• Update ollama to 0.14.1
• Full Changelog
• Fixed a macOS auto-update signature verification failure
Package Updates #20

[1.2.3]

• Update ollama to 0.14.3
• Full Changelog
• Z-Image Turbo: a 6-billion-parameter text-to-image model from Alibaba's Tongyi Lab that generates high-quality photorealistic images
• Flux.2 Klein: Black Forest Labs' fastest image-generation models to date
• Fixed issue where Ollama's macOS app would interrupt system shutdown
• Fixed ollama create and ollama show commands for experimental models
• The /api/generate API can now be used for image generation (see the sketch after this list)
• Fixed minor issues in Nemotron-3-Nano tool parsing
• Fixed issue where removing an image generation model would cause it to first load
• Fixed issue where ollama rm would only stop the first model in the list if it were running
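Image generation through /api/generate presumably takes the same request shape as text generation. A minimal sketch, assuming a locally pulled image model; the model tag and how the image comes back in the response are assumptions, only the endpoint itself is named above:

    # Hypothetical request; "z-image-turbo" is a placeholder tag for a pulled image model.
    curl http://localhost:11434/api/generate \
      -H "Content-Type: application/json" \
      -d '{
        "model": "z-image-turbo",
        "prompt": "a lighthouse at dusk, photorealistic",
        "stream": false
      }'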
Package Updates #21

[1.3.0]

• Update ollama to 0.15.0
• Full Changelog
• A new ollama launch command to use Ollama's models with Claude Code, Codex, OpenCode, and Droid without separate configuration (see the sketch after this list)
• Fixed issue where creating multi-line strings with """ would not work when using ollama run
• Ctrl+J and Shift+Enter now work for inserting newlines in ollama run
• Reduced memory usage for GLM-4.7-Flash models
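In practice the command is invoked with the name of the agent to start. A minimal usage sketch; the target names below are the ones mentioned elsewhere in this thread, and any flags or model selection beyond that are handled by the command itself (assumption):

    # Start a supported coding agent backed by local Ollama models.
    ollama launch claude      # Claude Code
    ollama launch opencode    # OpenCode
    ollama launch droid       # Droid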
Package Updates #22

[1.3.1]

• Update ollama to 0.15.1
• Full Changelog
• GLM-4.7-Flash performance and correctness improvements, fixing repetitive answers and tool-calling quality
• Fixed performance issues on macOS and arm64 Linux
• Fixed issue where ollama launch would not detect claude and would incorrectly update opencode configurations
Package Updates #23

[1.3.2]

• Update ollama to 0.15.2
• Full Changelog
• New ollama launch clawdbot command for launching Clawdbot using Ollama models
Package Updates #24

[1.3.3]

• Update ollama to 0.15.4
• Full Changelog
• ollama launch openclaw will now enter the standard OpenClaw onboarding flow if it has not yet been completed
• Renamed ollama launch clawdbot to ollama launch openclaw to reflect the project's new name
• Improved tool calling for Ministral models
• ollama launch will now respect the value of OLLAMA_HOST when it runs (see the sketch after this list)
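That makes it possible to launch an agent against a remote Ollama server. A minimal sketch; the host address is a placeholder, while OLLAMA_HOST itself is Ollama's standard environment variable for selecting the server:

    # Point the CLI at a non-default Ollama server, then launch an agent against it.
    export OLLAMA_HOST=http://192.168.1.50:11434
    ollama launch openclaw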
Package Updates #25

[1.3.4]

• Update ollama to 0.15.5
• Full Changelog
• Improvements to ollama launch
• Sub-agent support for ollama launch for planning, deep research, and similar tasks
• ollama signin will now open a browser window to the connect page to make signing in easier
• Ollama will now pick default context lengths based on available VRAM
• GLM-4.7-Flash support on Ollama's experimental MLX engine
• Fixed an off-by-one error when using num_predict in the API (see the sketch after this list)
• Fixed issue where tokens from a previous sequence would be returned when hitting num_predict
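For reference, num_predict is the request option that caps how many tokens a response may generate, passed in the options block of the API. A minimal sketch with a placeholder model tag:

    # Cap the response at 64 generated tokens; "qwen3" is a placeholder model tag.
    curl http://localhost:11434/api/generate \
      -H "Content-Type: application/json" \
      -d '{
        "model": "qwen3",
        "prompt": "List three uses for zstd compression.",
        "stream": false,
        "options": { "num_predict": 64 }
      }'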
Package Updates #26

[1.3.5]

• Update ollama to 0.15.6
• Full Changelog
• Fixed context limits when running ollama launch droid
• ollama launch will now download missing models instead of erroring
• Fixed bug where ollama launch claude would cause context compaction when providing images
Package Updates #27

[1.4.0]

• Update ollama to 0.16.1
• Full Changelog
• Installing Ollama via the curl install script on macOS will now only prompt for your password if it's required
• Installing Ollama via the install script on Windows will now show progress
• Image generation models will now respect the OLLAMA_LOAD_TIMEOUT variable (see the sketch after this list)
• GLM-5: a strong reasoning and agentic model from Z.ai with 744B total parameters (40B active), built for complex systems engineering and long-horizon tasks
• MiniMax-M2.5: a new state-of-the-art large language model designed for real-world productivity and coding tasks
• The new ollama launch command makes it easy to launch your favorite apps with models using Ollama
• Launch Pi with ollama launch pi
• Improvements to Ollama's MLX runner to support GLM-4.7-Flash
• Ctrl+G will now allow editing text prompts in a text editor when running a model
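Large image models can take a while to load, so the timeout variable is the knob to raise. A minimal sketch; the duration format, the value, and the model tag are all placeholders, only the OLLAMA_LOAD_TIMEOUT name comes from the note above:

    # Allow up to 15 minutes for the model to load before timing out (value and tag are placeholders).
    export OLLAMA_LOAD_TIMEOUT=15m
    ollama run z-image-turbo "a watercolor of a mountain village"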