Issues with 60-second timeout (unsure if related to Ollama or OpenWebUI)

Cloudron Forum · OpenWebUI · Solved · Tags: ollama, timeout · 6 posts · 2 posters
#1 · msbt (App Dev) · last edited by msbt

Hi there! I'm playing around with OpenWebUI and Ollama to host a local model, and I tried adding longer txt files as "knowledge" and asking questions about them. Adding text to the vector db seems to be working, but when I try to query it via mistral-7b, OpenWebUI stops with "504: Open WebUI: Server Connection Error" and Ollama throws this error after exactly 60 seconds:

    Dec 08 11:53:19 2025/12/08 10:53:19 [error] 33#33: *41243 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 172.18.0.1, server: _, request: "POST /api/chat HTTP/1.1", upstream: "http://0.0.0.0:11434/api/chat", host: "lama.example.com"
    Dec 08 11:53:19 172.18.0.1 - - [08/Dec/2025:10:53:19 +0000] "POST /api/chat HTTP/1.1" 504 176 "-" "Python/3.12 aiohttp/3.12.15"
    Dec 08 11:53:19 [GIN] 2025/12/08 - 10:53:19 | 500 | 1m0s | 172.18.18.78 | POST "/api/chat"
    

Since this machine is quite beefy (20 cores, 64 GB RAM) but doesn't have a proper GPU, it would probably work, it just takes a little longer to come up with a reply. I tried various env vars to adjust the timeout, but I'm guessing this comes from the built-in nginx. Can we set the timeout to 5 minutes or something?
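For reference, the 60-second mark lines up with nginx's default proxy_read_timeout of 60s, which is what the 504 above points at. A minimal sketch of the kind of override being asked for here, assuming the app's built-in nginx proxies to the upstream shown in the log (the 300s value is just the 5 minutes suggested above, not anything the package actually ships):

    # Sketch only, not the actual Cloudron package config; the upstream address
    # is taken from the error log above.
    location / {
        proxy_pass http://0.0.0.0:11434;
        # nginx defaults proxy_read_timeout to 60s, which is exactly when the
        # 504 fires; raising it gives a CPU-only model time to answer.
        proxy_read_timeout 300s;
    }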

#2 · nebulon (Staff)

Indeed, it seems like the nginx within the ollama package is timing out. Trying to reproduce this to provide a fix.

#3 · nebulon (Staff)

I was able to reproduce this, and we have released a new ollama package which increases the proxy read timeout to 1h. We tried to disable the timeout altogether, but nginx wants some value... we can increase it further if need be.

#4 · msbt (App Dev)

@nebulon much obliged, works like a charm now!

msbt marked this topic as a question
msbt has marked this topic as solved
#5 · msbt (App Dev) · last edited by msbt

@nebulon can you also add a line for client_max_body_size 20M or something? I just got a 413: Open WebUI: Server Connection Error in OpenWebUI, and the Ollama logs say [error] 25#25: *7901 client intended to send too large body: 2460421 bytes, client: 172.18.0.1, server: _, request: "POST /api/chat HTTP/1.1", host: "lama.example.com". Apparently the 1MB default is not enough for bigger queries (if that is what is currently set as the default).
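The directive being asked for here is nginx's client_max_body_size, whose default of 1m matches the "client intended to send too large body" error. A minimal sketch of the requested change (the 20M figure is just the value suggested above, not what the package ships):

    # Hypothetical addition to the app's nginx config: allow larger /api/chat
    # request bodies instead of rejecting them with 413 (nginx default is 1m).
    client_max_body_size 20M;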

#6 · nebulon (Staff)

The new package now disables the max body check for the nginx within the app. I couldn't quite reproduce the issue in my setup, and I'm not sure how you ended up with such big queries, but hopefully it fixes your case.
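In nginx terms, "disables the max body check" corresponds to setting the limit to zero; a one-line sketch of what the new package's behaviour amounts to (where exactly this sits inside the package config is an assumption):

    # client_max_body_size 0 turns off checking of the request body size entirely.
    client_max_body_size 0;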
