Issues with 60-second timeout (unsure if related to Ollama or OpenwebUI)
-
Hi there! I'm playing around with OpenwebUI and Ollama to host a local model. I tried adding longer .txt files as "knowledge" and asking questions about them. Adding the text to the vector DB seems to work, but when I query it via mistral-7b, OpenwebUI stops with "504: Open WebUI: Server Connection Error" and Ollama throws this error after exactly 60 seconds:
```
Dec 08 11:53:19 2025/12/08 10:53:19 [error] 33#33: *41243 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 172.18.0.1, server: _, request: "POST /api/chat HTTP/1.1", upstream: "http://0.0.0.0:11434/api/chat", host: "lama.example.com"
Dec 08 11:53:19 172.18.0.1 - - [08/Dec/2025:10:53:19 +0000] "POST /api/chat HTTP/1.1" 504 176 "-" "Python/3.12 aiohttp/3.12.15"
Dec 08 11:53:19 [GIN] 2025/12/08 - 10:53:19 | 500 | 1m0s | 172.18.18.78 | POST "/api/chat"
```

Since this machine is quite beefy (20 cores, 64 GB RAM) but doesn't have a proper GPU, the query would probably succeed, it just takes a little longer to come up with a reply. I tried various env vars to adjust the timeout, but I'm guessing the limit comes from the built-in nginx. Can we set the timeout to 5 minutes or something?
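In case it helps, the 60-second cutoff matches nginx's default proxy timeouts, and the error above ("while reading response header from upstream") points at `proxy_read_timeout` specifically. Below is a minimal sketch of what raising it to 5 minutes could look like; the upstream address is taken from the log, but the location block and the actual layout of the packaged nginx config are assumptions on my part:

```nginx
# Hypothetical reverse-proxy block; the directive names and defaults are
# standard nginx, but the surrounding config is assumed, not copied from
# the package.
location /api/ {
    proxy_pass http://0.0.0.0:11434;

    # These default to 60s, matching the observed 1m0s cutoff.
    proxy_read_timeout    300s;  # gap between reads from upstream; this is
                                 # the one a slow CPU-only reply trips
    proxy_send_timeout    300s;  # gap between writes to upstream
    proxy_connect_timeout 75s;   # connect handshake; per the nginx docs this
                                 # usually cannot exceed 75s anyway
}
```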
-
msbt marked this topic as a question
-
msbt has marked this topic as solved
-
@nebulon can you also add a line for `client_max_body_size 20M` or something? I just got a "413: Open WebUI: Server Connection Error" in OpenwebUI, where the Ollama logs say:

```
[error] 25#25: *7901 client intended to send too large body: 2460421 bytes, client: 172.18.0.1, server: _, request: "POST /api/chat HTTP/1.1", host: "lama.example.com"
```

Apparently the 1 MB nginx default is not enough for bigger queries (if that is what is currently set as the default).
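For reference, the directive in question is a single line in the nginx config. A minimal sketch follows, assuming it is set at the server level; where exactly the packaged config would place it is an assumption:

```nginx
# Hypothetical server block; client_max_body_size can be set at the
# http, server, or location level. nginx's built-in default is 1m,
# which the 2460421-byte request body above exceeds.
server {
    listen 80;
    client_max_body_size 20M;  # allow request bodies up to 20 MB
}
```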