Cloudron Forum

Apps | Demo | Docs | Install
  1. Cloudron Forum
  2. Ollama
  3. Ollama - Package Updates

Ollama - Package Updates

17 Posts · 2 Posters · 1.3k Views
Package Updates (App Dev) · #8

[1.0.0]

• First stable package release with ollama 0.12.9
Package Updates (App Dev) · #9

[1.0.1]

• Update ollama to 0.12.10
• Full Changelog
• ollama run now works with embedding models
• Fixed errors when running qwen3-vl:235b and qwen3-vl:235b-instruct
• Enable flash attention for Vulkan (currently needs to be built from source)
• Add Vulkan memory detection for Intel GPU using DXGI+PDH
• Ollama will now return tool call IDs from the /api/chat API
• Fixed hanging due to CPU discovery
• Ollama will now show login instructions when switching to a cloud model in interactive mode
• Fix reading stale VRAM data
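To illustrate the /api/chat tool-calling item above, here is a minimal sketch of a request body for Ollama's native chat endpoint (POST to http://localhost:11434/api/chat on a stock install). The model name and the weather tool are illustrative placeholders, not part of the package; per this release, the tool calls in the response now also carry IDs.

```python
import json

# Sketch of a tool-calling request for Ollama's /api/chat endpoint.
# Model name and tool definition are illustrative placeholders.
payload = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "stream": False,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload)
print(body[:60])
```

You would POST this body with Content-Type: application/json and read the model's tool calls (now including their IDs) from the response message.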
Package Updates (App Dev) · #10

[1.0.2]

• Update ollama to 0.12.11
• Full Changelog
• Ollama's API and the OpenAI-compatible API now support logprobs
• Ollama's new app now supports WebP images
• Improved rendering performance in Ollama's new app, especially when rendering code
• The "required" field in tool definitions will now be omitted if not specified
• Fixed issue where "tool_call_id" would be omitted when using the OpenAI-compatible API
• Fixed issue where ollama create would import data from both consolidated.safetensors and other safetensor files
• Ollama will now prefer dedicated GPUs over iGPUs when scheduling models
• Vulkan can now be enabled by setting OLLAMA_VULKAN=1. For example: OLLAMA_VULKAN=1 ollama serve
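The logprobs item above follows the OpenAI request schema; a minimal sketch of such a request against the OpenAI-compatible endpoint (POST to http://localhost:11434/v1/chat/completions on a default install). The model name and prompt are placeholders, and whether every OpenAI logprobs option is honored is an assumption to verify against the release notes.

```python
import json

# Sketch of a chat completion request asking for log probabilities,
# following the OpenAI request schema. Model and prompt are placeholders.
payload = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Say hello"}],
    "logprobs": True,   # request per-token log probabilities
    "top_logprobs": 3,  # per the OpenAI schema: top alternatives per token
}

body = json.dumps(payload)
print(body)
```

The response then carries a logprobs object alongside each choice, which is useful for scoring or confidence estimation.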
Package Updates (App Dev) · #11

[1.1.0]

• Update ollama to 0.13.0
• Full Changelog
• DeepSeek-OCR is now supported
• DeepSeek-V3.1 architecture is now supported in Ollama's engine
• Fixed performance issues that arose in Ollama 0.12.11 on CUDA
• Fixed issue where Linux install packages were missing required Vulkan libraries
• Improved CPU and memory detection while in containers/cgroups
• Improved VRAM information detection for AMD GPUs
• Improved KV cache performance to no longer require defragmentation
Package Updates (App Dev) · #12

[1.1.1]

• Update ollama to 0.13.1
• Full Changelog
• nomic-embed-text will now use Ollama's engine by default
• Tool calling support for cogito-v2.1
• Fixed issues with CUDA VRAM discovery
• Fixed link to docs in Ollama's app
• Fixed issue where models would be evicted on CPU-only systems
• Ollama will now better render errors instead of showing Unmarshal: errors
• Fixed issue where CUDA GPUs would fail to be detected with older GPUs
• Added thinking and tool parsing for cogito-v2.1
Package Updates (App Dev) · #13

[1.1.2]

• Increase the proxy read timeout to 1h
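For context, in an nginx-style reverse proxy (Cloudron fronts apps with nginx) a one-hour read timeout looks like the fragment below, which keeps long-running model responses from being cut off. The location block and upstream port are illustrative assumptions, not the package's actual configuration.

```nginx
# Illustrative reverse-proxy fragment: allow slow model responses
# by extending the read timeout from the 60s default to one hour.
location / {
    proxy_pass http://127.0.0.1:11434;  # assumed upstream port
    proxy_read_timeout 1h;
}
```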
Package Updates (App Dev) · #14

[1.1.3]

• Disable body size check within the app
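Disabling the body size check typically corresponds to lifting the request-body limit in the app's nginx layer, so large model pushes and imports are not rejected with HTTP 413. This is a sketch of what such a change usually looks like, not the package's actual config.

```nginx
# Illustrative fragment: a value of 0 disables nginx's
# request-body size check entirely.
client_max_body_size 0;
```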
Package Updates (App Dev) · #15

[1.1.4]

• Update ollama to 0.13.3
Package Updates (App Dev) · #16

[1.1.5]

• Update ollama to 0.13.4
• Full Changelog
• Nemotron 3 Nano: a new standard for efficient, open, and intelligent agentic models
• Olmo 3 and Olmo 3.1: a series of open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.
• Enable Flash Attention automatically for models by default
• Fixed handling of long contexts with Gemma 3 models
• Fixed issue that would occur with Gemma 3 QAT models or other models imported with the Gemma 3 architecture
Package Updates (App Dev) · #17

[1.1.6]

• Update ollama to 0.13.5
• Full Changelog
• Google's FunctionGemma is now available on Ollama
• bert architecture models now run on Ollama's engine
• Added built-in renderer & tool parsing capabilities for DeepSeek-V3.1
• Fixed issue where nested properties in tools may not have been rendered properly