That's working for me too by changing the name! Thanks @james
SansGuidon
Posts
- 
How to configure LibreChat to discuss with secured self hosted Ollama?
 - 
How to configure LibreChat to discuss with secured self hosted Ollama?
Thanks @james for confirming my suspicion. But Ollama is accepting the header, so I guess the problem is on LibreChat's side, right?
 - 
How to configure LibreChat to discuss with secured self hosted Ollama?
Thanks @james
 - 
How to configure LibreChat to discuss with secured self hosted Ollama?
Correct. I also tested that I can reach the Ollama API hosted on Cloudron using that same Bearer token:
curl -v https://ollama-api.<REDACTED>/v1/models -H "Authorization: Bearer <REDACTED>"
It does not return anything more useful than {"object":"list","data":null}, but at least it's a 200 and a good enough test for me.
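If the proxy forwards Ollama's native API in the same way, the tags endpoint should give a slightly more informative listing of the locally pulled models (a sketch, same redacted host and token as above):

curl -s https://ollama-api.<REDACTED>/api/tags -H "Authorization: Bearer <REDACTED>"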
 - 
How to configure LibreChat to discuss with secured self hosted Ollama?
Hi
I've installed LibreChat and Ollama on Cloudron and I've followed the checklists for each app.
LibreChat discusses with Mistral without trouble using my API Key.
However I can't make it work with my own instance of Ollama and I don't know what I'm doing wrong.
I get this error from LibreChat:

Something went wrong. Here's the specific error message we encountered: An error occurred while processing the request: <html> <head><title>401 Authorization Required</title></head> <body> <center><h1>401 Authorization Required</h1></center> <hr><center>nginx/1.24.0 (Ubuntu)</center> </body> </html>

I guess a Bearer token is missing, but even if I add it to the librechat.yaml configuration, the problem remains.
Any shared experience/example would help. I found the documentation very light on this issue, and the examples I see on how to configure LibreChat for Ollama in other contexts show various base URLs for Ollama and sometimes use 'ollama' as the API key, sometimes provide a Bearer token.
My librechat.yaml config:
version: 1.2.8
endpoints:
  custom:
    - name: "Ollama"
      apiKey: "ollama"
      baseURL: "https://ollama-api.<REDACTED>/v1"
      models:
        default:
          - "llama3:latest"
          - "mistral"
          - "gemma:7b"
          - "phi3:mini"
        fetch: false
      headers:
        Authorization: 'Bearer ${OLLAMA_BEARER_TOKEN}'
      titleConvo: true
      titleModel: "current_model"
      summarize: false
      summaryModel: "current_model"
      forcePrompt: false
      modelDisplayLabel: "Ollama"

Does it look correct? The OLLAMA_BEARER_TOKEN is defined in the .env file. Thanks in advance
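For reference, the kind of request LibreChat would send can be simulated from the host (a rough sketch; the model name and the exported OLLAMA_BEARER_TOKEN value are placeholders). If this returns a completion but LibreChat still gets the 401, the header substitution in librechat.yaml is the likely suspect:

curl -s https://ollama-api.<REDACTED>/v1/chat/completions \
  -H "Authorization: Bearer $OLLAMA_BEARER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral", "messages": [{"role": "user", "content": "ping"}]}'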
 - 
Memory Usage
For anyone facing this issue, the best workaround/mitigation for me was to run Anubis on a separate VPS, following a setup similar to https://forum.cloudron.io/topic/13957/deploying-anubis-ai-crawler-filtering-on-a-cloudron-server/8, so that all requests to my Gitea instance go through Anubis except for some API/health check calls (from Uptime Kuma etc.). I could even decrease the allocated memory.
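For illustration only, the vhost on the Anubis VPS is roughly shaped like this (a sketch with placeholder names and ports, assuming Anubis listens on 127.0.0.1:8923 and forwards to the Cloudron server; TLS directives omitted):

# nginx on the Anubis VPS (placeholder hostnames, not the actual setup)
server {
    server_name git.example.com;

    # API and health check calls (Uptime Kuma etc.) bypass the Anubis challenge
    location /api/ {
        proxy_pass https://my.cloudron.example;
        proxy_set_header Host $host;
    }

    # everything else goes through Anubis first, which then forwards to Cloudron
    location / {
        proxy_pass http://127.0.0.1:8923;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}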
 - 
After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace
Thanks @zohup!
 - 
Cloudron Everything full disk - where to delete ?
@zohup Can you also print the result of this command on your Cloudron instance?

du -sh /var/log/syslog*

Just in case this is the same problem as posted in https://forum.cloudron.io/topic/13361/after-ubuntu-22-24-upgrade-syslog-getting-spammed-and-grows-way-to-much-clogging-up-the-diskspace/34?_=1760951639695
 - 
After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace
@james said in After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace:
Hello @zohup
This is fixed in Cloudron Version 9.

I think what @zohup was asking is how to fix this in production environments which are still running Cloudron Version 8.
 - 
What's coming in Cloudron 9
I believe there is good hope the release will land soon
 ...
I keep track of the status of milestone 9.0 on a daily basis (thanks to changedetection), and yesterday they closed the last tickets attached to milestone 9.0: https://git.cloudron.io/platform/box/-/milestones/84#tab-issues. Unless there is anything else hidden from that dashboard, I hope it's a matter of the coming days. But hey, I'm only a customer, so of course I want all the best for yesterday xD
 - 
Changelog format seems broken
Good to know, but it doesn't help me trust the notifications at all. It might be good to add a "generated by AI" note at the bottom of those notifications/summaries as a matter of transparency for the users.
 - 
Changelog format seems broken
Thanks! LLMs will never be 100% reliable, but maybe add some try/catch mechanism or retry loop for cases where the ratio of bullet points between the original content and the new summary does not make sense, e.g. more bullet points in the final summary than in the original (see the sketch below). I'm used to working with Mistral and always end up adding validation/retry mechanisms for every workload, as LLMs are designed to hallucinate, and the Mistral API is not designed to always answer something valuable, nor to always answer quickly or on the first try.
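Something along these lines, as a rough sketch only (summarizeWithRetry, bulletCount and the summarize callback are hypothetical names, not Cloudron's actual code):

'use strict';

// count lines that look like bullet points
function bulletCount(text) {
    return text.split('\n').filter((line) => /^\s*[-*]/.test(line)).length;
}

// summarize is whatever function calls the LLM (e.g. the Mistral API)
async function summarizeWithRetry(changelog, summarize, maxAttempts = 3) {
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            const summary = await summarize(changelog);
            // reject summaries that contain more bullet points than the changelog itself
            if (bulletCount(summary) <= Math.max(bulletCount(changelog), 1)) return summary;
        } catch (error) {
            // API hiccup or invalid answer: fall through and retry
        }
    }
    return changelog; // give up and show the original changelog instead of a bad summary
}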
 - 
Changelog format seems broken
As seen in https://forum.cloudron.io/post/112933, and compared to the original changelog at https://github.com/syncthing/syncthing/releases/tag/v2.0.10, the changelog in Cloudron just adds useless bullet points and newlines.

Thanks!
 - 
After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace
Nice! Thanks @girish

 - 
After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace
@girish Docker 27.3.1 and Ubuntu 24.04.2 LTS
 - 
After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace
@joseph I don't see any special setting in UptimeKuma being applied on my instance. Can you try to reproduce with the instructions below? Hope that makes sense.
Ensure your default log driver is journald:

systemctl show docker -p ExecStart

Should show something like:

ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs --storage-driver=overlay2 --experimental --ip6tables --use>

Then try to mimic what backupSqlite() does (no log driver; redirect only outside docker run):

docker run --rm alpine sh -lc 'for i in $(seq 1 3); do echo "INSERT INTO t VALUES($i);"; done' > /tmp/out.sql

Observe that the duplicates got logged to syslog anyway:

grep 'INSERT INTO t VALUES' /var/log/syslog | wc -l   # > 0
cat /tmp/out.sql | wc -l                              # same 3 lines

Now repeat with logging disabled (what the fix does):

docker run --rm --log-driver=none alpine sh -lc 'for i in $(seq 1 3); do echo "INSERT INTO t VALUES($i);"; done' > /tmp/out2.sql
grep 'INSERT INTO t VALUES' /var/log/syslog | wc -l   # unchanged
 - 
After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace
For now, as a workaround, I'm applying this patch; please advise if you have any concerns with it.

diff --git a/box/src/services.js b/box/src/services.js
--- a/box/src/services.js
+++ b/box/src/services.js
@@ -1,7 +1,7 @@
 'use strict';

 exports = module.exports = {
     getServiceConfig,
     listServices,
     getServiceStatus,
@@ -308,7 +308,7 @@ async function backupSqlite(app, options) {
     // we use .dump instead of .backup because it's more portable across sqlite versions
     for (const p of options.paths) {
         const outputFile = path.join(paths.APPS_DATA_DIR, app.id, path.basename(p, path.extname(p)) + '.sqlite');
         // we could use docker exec but it may not work if app is restarting
         const cmd = `sqlite3 ${p} ".dump"`;
         const runCmd = `docker run --rm --name=sqlite-${app.id} \
             --net cloudron \
             -v ${volumeDataDir}:/app/data \
             --label isCloudronManaged=true \
-            --read-only -v /tmp -v /run ${app.manifest.dockerImage} ${cmd} > ${outputFile}`;
+            --log-driver=none \
+            --read-only -v /tmp -v /run ${app.manifest.dockerImage} ${cmd} > ${outputFile} 2>/dev/null`;
         await shell.bash(runCmd, { encoding: 'utf8' });
     }
 }
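To check the effect, a simple before/after comparison should do (sketch):

grep -c 'INSERT INTO' /var/log/syslog   # note the count
# trigger a backup of the UptimeKuma app, then:
grep -c 'INSERT INTO' /var/log/syslog   # should be unchanged with the patch applied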
 - 
After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace
Hi @joseph
root@ubuntu-cloudron-16gb-nbg1-3:~# du -sh /var/log/syslog*
8.2G    /var/log/syslog
0       /var/log/syslog.1
0       /var/log/syslog.1.gz-2025083120.backup
52K     /var/log/syslog.2.gz
4.0K    /var/log/syslog.3.gz
4.0K    /var/log/syslog.4.gz

As mentioned earlier in the discussion, it's due to the sqlite backup dumps of UptimeKuma which end up in the wrong place.
root@ubuntu-cloudron-16gb-nbg1-3:~# grep 'INSERT INTO' /var/log/syslog | wc -l
47237303

And I think this started being investigated by @nebulon.
This generates a few GBs worth of waste per day on my Cloudron instance, which causes regular outages (every few weeks).
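For anyone needing to reclaim the space right away, something like this should work in the meantime (a sketch; since Docker logs via journald, the journal holds a copy of the same spam):

truncate -s 0 /var/log/syslog     # empty the runaway log; rsyslog keeps appending afterwards
journalctl --vacuum-size=500M     # shrink the journal copy of the same messages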
 - 
Add historical disk usage in System info - Graphs section
As proposed in https://forum.cloudron.io/post/112581, I would love to see how disk usage evolves over time, as is done for the CPU and Memory graphs. For now we only capture the disk usage at some instant T, whereas I want to be able to tell if disk usage is growing abnormally.
Maybe for a future release (Cloudron 9.x)?
Thanks!

 - 
After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace
@james said in After Ubuntu 22/24 Upgrade syslog getting spammed and grows way to much clogging up the diskspace:
Hello @SansGuidon
You mean the disk usage as a historical statistic and not only a singular point when checking?
If this is what you mean, no that is not part of Cloudron 9 at the moment.
But in my opinion, a very welcome feature request after Cloudron 9 is released!

Exactly, the idea is to be able to notice if something weird is happening (like disk usage growing constantly at a rapid rate).
I'll make a proposal in a separate thread -> Follow up in https://forum.cloudron.io/topic/14292/add-historical-disk-usage-in-system-info-graphs-section