Ollama - Package Updates

nebulon (Staff) wrote (#1):

You can use this thread to track updates to the Ollama package.

Please open issues in a separate topic instead of replying here.

nebulon (Staff) wrote (#2):

[0.1.0]

• Initial version for Ollama

Package Updates (App Dev) wrote (#3):

[0.2.0]

• Update ollama to 0.12.6
• Full Changelog
• Ollama's app now supports searching when running DeepSeek-V3.1, Qwen3 and other models that support tool calling.
• Flash attention is now enabled by default for Gemma 3, improving performance and memory utilization
• Fixed issue where Ollama would hang while generating responses
• Fixed issue where qwen3-coder would act in raw mode when using /api/generate or ollama run qwen3-coder <prompt>
• Fixed qwen3-embedding providing invalid results
• Ollama will now evict models correctly when num_gpu is set
• Fixed issue where tool_index with a value of 0 would not be sent to the model
• Thinking models now support structured outputs when using the /api/chat API (a sketch follows this list)
• Ollama's app will now wait until Ollama is running to allow for a conversation to be started
• Fixed issue where "think": false would show an error instead of being silently ignored
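
The structured-outputs item above can be tried directly against the packaged server. Below is a minimal sketch, assuming the Ollama API is reachable on the default port 11434 (on Cloudron, substitute your app's address) and that a thinking-capable model such as qwen3 has already been pulled; the model name and schema are illustrative assumptions, not part of the package.

```python
# Minimal sketch: structured output from a thinking model via /api/chat.
# Host, model name and schema are assumptions; adjust for your installation.
import json
import requests

OLLAMA_URL = "http://localhost:11434"  # replace with your Cloudron app's address

schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["city", "population"],
}

resp = requests.post(
    f"{OLLAMA_URL}/api/chat",
    json={
        "model": "qwen3",  # assumed model; any thinking-capable model works
        "messages": [
            {"role": "user", "content": "Name one large city and its population."}
        ],
        "format": schema,  # constrain the reply to the JSON schema above
        "think": True,     # enable thinking; set to False to disable it
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
# With "format" set, the message content is a JSON document matching the schema.
print(json.loads(resp.json()["message"]["content"]))
```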

Package Updates (App Dev) wrote (#4):

[0.3.0]

• Fix wrong documentation URL in package info

Package Updates (App Dev) wrote (#5):

[0.3.1]

• Update ollama to 0.12.7
• Full Changelog
• Qwen3-VL is now available in all parameter sizes ranging from 2B to 235B
• MiniMax-M2: a 230 billion parameter model built for coding & agentic workflows, available on Ollama's cloud
• Ollama's new app now includes a way to add one or many files when prompting the model
• For better responses, thinking levels can now be adjusted for the gpt-oss models
• New API documentation is available for Ollama's API: https://docs.ollama.com/api
• Model load failures now include more information on Windows
• Fixed embedding results being incorrect when running embeddinggemma
• Fixed gemma3n on Vulkan backend
• Increased time allocated for ROCm to discover devices
• Fixed truncation error when generating embeddings (see the embeddings sketch after this list)
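
As a quick check of the embedding fixes above, the sketch below calls the /api/embed endpoint. The host and the embeddinggemma model name are assumptions; any pulled embedding model works.

```python
# Minimal sketch: generate embeddings through /api/embed, the code path
# touched by the embeddinggemma and truncation fixes listed above.
import requests

OLLAMA_URL = "http://localhost:11434"  # replace with your Cloudron app's address

resp = requests.post(
    f"{OLLAMA_URL}/api/embed",
    json={
        "model": "embeddinggemma",  # assumed embedding model; must be pulled first
        "input": ["first sentence", "second sentence"],
    },
    timeout=60,
)
resp.raise_for_status()
embeddings = resp.json()["embeddings"]  # one vector per input string
print(len(embeddings), len(embeddings[0]))
```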

Package Updates (App Dev) wrote (#6):

[0.4.0]

• Update ollama to 0.12.9 (a version-check sketch follows this list)
• Full Changelog
• Fix performance regression on CPU-only systems
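
To confirm which upstream release the updated package is actually running, you can query the version endpoint. A minimal sketch, assuming the API is reachable at the address below:

```python
# Minimal sketch: report the running Ollama version via /api/version.
import requests

OLLAMA_URL = "http://localhost:11434"  # replace with your Cloudron app's address

resp = requests.get(f"{OLLAMA_URL}/api/version", timeout=10)
resp.raise_for_status()
print(resp.json()["version"])  # expected to report 0.12.9 after this update
```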