Cloudron Forum


Ollama - Package Updates

21 Posts 2 Posters 2.0k Views 3 Watching
Package Updates #12

    [1.1.1]

    • Update ollama to 0.13.1
    • Full Changelog
    • nomic-embed-text will now use Ollama's engine by default
    • Tool calling support for cogito-v2.1 (see the sketch after this list)
    • Fixed issues with CUDA VRAM discovery
    • Fixed link to docs in Ollama's app
    • Fixed issue where models would be evicted on CPU-only systems
    • Ollama will now better render errors instead of showing Unmarshal: errors
    • Fixed issue where CUDA GPUs would fail to be detected with older GPUs
    • Added thinking and tool parsing for cogito-v2.1
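
For the tool-calling item above, here is a minimal sketch of calling Ollama's /api/chat endpoint with a tool definition. The base URL, model tag, and the get_weather tool are illustrative assumptions, not part of the release notes.

```python
import requests

# A minimal sketch: call Ollama's /api/chat with a tool definition, so a
# tool-capable model such as cogito-v2.1 can emit a tool call.
# Base URL, model tag, and the get_weather tool are illustrative assumptions.
OLLAMA_URL = "http://localhost:11434"

payload = {
    "model": "cogito-v2.1",  # assumed tag; use whichever tool-capable model you have pulled
    "messages": [{"role": "user", "content": "What is the weather in Berlin?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": False,
}

resp = requests.post(f"{OLLAMA_URL}/api/chat", json=payload, timeout=120)
resp.raise_for_status()
message = resp.json()["message"]
# If the model chose to call the tool, the parsed calls appear in tool_calls.
for call in message.get("tool_calls", []):
    print(call["function"]["name"], call["function"]["arguments"])
```
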
Package Updates #13

      [1.1.2]

      • Increase the proxy read timeout to 1h (see the sketch after this list)
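
To illustrate why the longer proxy read timeout matters, here is a minimal sketch of a streaming generation request whose client-side read timeout matches the new 1h limit. The app domain and model tag are assumptions.

```python
import json
import requests

# A minimal sketch of why the 1h proxy read timeout matters: a long generation on a
# large model (or a CPU-only host) can hold the connection open for many minutes.
# Stream the response and give the client a read timeout that matches the proxy's.
# The app domain and model tag are assumptions.
OLLAMA_URL = "https://ollama.example.com"

with requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3.2", "prompt": "Write a long essay about reverse proxies.", "stream": True},
    stream=True,
    timeout=(10, 3600),  # (connect, read) in seconds; read matches the 1h proxy limit
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            chunk = json.loads(line)  # each streamed line is a JSON object
            print(chunk.get("response", ""), end="", flush=True)
```
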
Package Updates #14

        [1.1.3]

        • Disable body size check within the app
Package Updates #15

          [1.1.4]

          • Update ollama to 0.13.3
Package Updates #16

            [1.1.5]

            • Update ollama to 0.13.4
            • Full Changelog
            • Nemotron 3 Nano: A new Standard for Efficient, Open, and Intelligent Agentic Models
            • Olmo 3 and Olmo 3.1: A series of open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.
            • Enable Flash Attention automatically for models by default
            • Fixed handling of long contexts with Gemma 3 models
            • Fixed issue that would occur with Gemma 3 QAT models or other models imported with the Gemma 3 architecture
Package Updates #17

              [1.1.6]

              • Update ollama to 0.13.5
              • Full Changelog
              • Google's FunctionGemma is now available on Ollama
              • bert architecture models now run on Ollama's engine (see the embedding sketch after this list)
              • Added built-in renderer & tool parsing capabilities for DeepSeek-V3.1
              • Fixed issue where nested properties in tools may not have been rendered properly
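
For the bert-architecture item above, a minimal sketch of requesting embeddings through /api/embed; the engine change is internal, so the HTTP call itself is unchanged. The base URL and model tag are assumptions.

```python
import requests

# A minimal sketch (not from the release notes) of requesting embeddings via /api/embed.
# bert-architecture embedding models now run on Ollama's own engine, but the HTTP call
# itself is unchanged. The base URL and model tag are assumptions.
OLLAMA_URL = "http://localhost:11434"

resp = requests.post(
    f"{OLLAMA_URL}/api/embed",
    json={"model": "nomic-embed-text", "input": ["first sentence", "second sentence"]},
    timeout=60,
)
resp.raise_for_status()
embeddings = resp.json()["embeddings"]  # one vector per input string
print(len(embeddings), len(embeddings[0]))
```
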
Package Updates #18

                [1.2.0]

                • Update ollama to 0.14.0
                • Full Changelog
                • ollama run --experimental now opens a new Ollama CLI that includes an agent loop and the bash tool
                • Anthropic API compatibility: support for the /v1/messages API (see the sketch after this list)
                • A new REQUIRES command for the Modelfile allows declaring which version of Ollama is required for the model
                • For older models, Ollama will avoid an integer underflow on low VRAM systems during memory estimation
                • More accurate VRAM measurements for AMD iGPUs
                • Ollama's app will now highlight Swift source code
                • An error will now return when embeddings return NaN or -Inf
                • Ollama's Linux install bundles now use zstd compression
                • New experimental support for image generation models, powered by MLX
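
For the Anthropic compatibility item above, a minimal sketch that assumes the /v1/messages endpoint mirrors the request and response shape of Anthropic's Messages API. The base URL and model tag are assumptions.

```python
import requests

# A minimal sketch of the new Anthropic-compatible endpoint, assuming /v1/messages
# mirrors the request/response shape of Anthropic's Messages API. The base URL and
# model tag are assumptions; check Ollama's docs for the exact supported fields.
OLLAMA_URL = "http://localhost:11434"

resp = requests.post(
    f"{OLLAMA_URL}/v1/messages",
    json={
        "model": "llama3.2",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=120,
)
resp.raise_for_status()
# Anthropic-style responses carry a list of content blocks.
for block in resp.json().get("content", []):
    if block.get("type") == "text":
        print(block["text"])
```
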
Package Updates #19

                  [1.2.1]

                  • Update ollama to 0.14.1
                  • Full Changelog
                  • Fixed macOS auto-update signature verification failure
Package Updates #20

                    [1.2.3]

                    • Update ollama to 0.14.3
                    • Full Changelog
                    • Z-Image Turbo: a 6 billion parameter text-to-image model from Alibaba's Tongyi Lab. It generates high-quality photorealistic images.
                    • Flux.2 Klein: Black Forest Labs' fastest image-generation model to date.
                    • Fixed issue where Ollama's macOS app would interrupt system shutdown
                    • Fixed ollama create and ollama show commands for experimental models
                    • The /api/generate API can now be used for image generation (see the sketch after this list)
                    • Fixed minor issues in Nemotron-3-Nano tool parsing
                    • Fixed issue where removing an image generation model would cause it to be loaded first
                    • Fixed issue where ollama rm would only stop the first model in the list if it were running
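
For the image-generation item above, a heavily hedged sketch of calling /api/generate with an image model. This feature is experimental upstream; the base URL, the model tag, and the response field carrying the image data are assumptions, so verify against Ollama's docs.

```python
import base64
import requests

# A heavily hedged sketch of image generation through /api/generate. This feature is
# experimental upstream; the base URL, the model tag, and the response field holding
# the image data ("images") are all assumptions here, so verify against Ollama's docs.
OLLAMA_URL = "http://localhost:11434"

resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "z-image-turbo", "prompt": "a lighthouse at dusk, photorealistic", "stream": False},
    timeout=600,
)
resp.raise_for_status()
data = resp.json()
# Assumption: generated images come back base64-encoded in an "images" list.
for i, img_b64 in enumerate(data.get("images", [])):
    with open(f"image_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
```
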
Package Updates #21

                      [1.3.0]

                      • Update ollama to 0.15.0
                      • Full Changelog
                      • A new ollama launch command to use Ollama's models with Claude Code, Codex, OpenCode, and Droid without separate configuration
                      • Fixed issue where creating multi-line strings with """ would not work when using ollama run
                      • Ctrl+J and Shift+Enter now work for inserting newlines in ollama run
                      • Reduced memory usage for GLM-4.7-Flash models