After Ubuntu 22/24 Upgrade syslog getting spammed and grows way too much, clogging up the disk space
-
So the docker daemon itself using journald via
--log-driver=journald is correct. Also it is correct that the containers which are managed and started by Cloudron will have syslog in the LogConfig of the HostConfig. Also it should mention the syslog-address being unix://home/yellowtent/platformdata/logs/syslog.sock. From what I can see in your post this all looks correct and as intended.
Thus, none of the docker containers should log to journald or rsyslogd, at least as long as they were created by Cloudron itself, which sets those options.
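To see this on a live system, you can inspect any Cloudron-managed app container (the container name below is a placeholder; this is just a generic Docker command to illustrate the point):

docker inspect --format '{{json .HostConfig.LogConfig}}' <app-container-id>
# expected: Type "syslog" with a syslog-address option pointing at the platformdata syslog.sock mentioned above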
Given that this is uptime kuma, which in turn is just using sqlite, this led me to https://git.cloudron.io/platform/box/-/blob/master/src/services.js?ref_type=heads#L933 which indeed starts a container without specifying the Cloudron log driver configs. So that is probably one thing we should fix.
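A container started without any explicit log configuration simply inherits the Docker daemon default, which you can confirm with (again a generic Docker command, shown only to illustrate the point):

docker info --format '{{.LoggingDriver}}'
# prints "journald" when dockerd runs with --log-driver=journald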
This however would still mean the GBs of SQL dump logs just end up in another place. So the main issue to fix is that
sqlite3 app.db .dump, which is run to create the SQL dump, also somehow logs to stdout/stderr despite stdout being redirected to the dump file... and that ends up in the logs somehow. I haven't found a fix yet, but just wanted to share the investigation here.
-
In the meantime, the problem still persists, it seems:
root@ubuntu-cloudron-16gb-nbg1-3:~# du -sh /var/log/syslog*
15G     /var/log/syslog
26G     /var/log/syslog.1
0       /var/log/syslog.1.gz-2025083120.backup
52K     /var/log/syslog.2.gz
4.0K    /var/log/syslog.3.gz
4.0K    /var/log/syslog.4.gz

The disk graph shows:
docker                          25.9 GB
docker-volumes                  7.79 GB
/apps.swap                      4.29 GB
platformdata                    3.77 GB
boxdata                         58.34 MB
maildata                        233.47 kB
Everything else (Ubuntu, etc)   48.67 GB

root@ubuntu-cloudron-16gb-nbg1-3:~# truncate -s 0 /var/log/syslog
root@ubuntu-cloudron-16gb-nbg1-3:~# truncate -s 0 /var/log/syslog.1

After truncating the logs (see above), I reclaim the disk space, but I really need to work on a more effective patch / housekeeping job to prevent this from happening again.
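As an interim housekeeping job, something along these lines would automate that truncation until the root cause is fixed. This is only a sketch of my own (the 2 GB threshold, the file list and the cron.hourly location are assumptions, and it only treats the symptom, not the cause):

cat > /etc/cron.hourly/syslog-size-cap <<'EOF'
#!/bin/sh
# truncate runaway syslog files once they exceed ~2 GB
for f in /var/log/syslog /var/log/syslog.1; do
    [ -f "$f" ] && [ "$(stat -c %s "$f")" -gt 2147483648 ] && truncate -s 0 "$f"
done
exit 0
EOF
chmod +x /etc/cron.hourly/syslog-size-cap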

This disk contains:

docker                          25.9 GB
docker-volumes                  8.02 GB
/apps.swap                      4.29 GB
platformdata                    3.8 GB
boxdata                         57.93 MB
maildata                        233.47 kB
Everything else (Ubuntu, etc)   7.62 GB

I would also love it if the Cloudron disk usage view were a graph like the ones for CPU and Memory. Maybe it's already planned for Cloudron 9, otherwise should I mention that idea in a new thread, @nebulon?
-
Hello @SansGuidon
You mean the disk usage as a historical statistic and not only a singular point when checking?
If this is what you mean, no that is not part of Cloudron 9 at the moment.
But in my opinion, a very welcome feature request after Cloudron 9 is released!
-
@james said in After Ubuntu 22/24 Upgrade syslog getting spammed and grows way too much, clogging up the disk space:
Hello @SansGuidon
You mean the disk usage as a historical statistic and not only a singular point when checking?
If this is what you mean, no that is not part of Cloudron 9 at the moment.
But in my opinion, a very welcome feature request after Cloudron 9 is released!

Exactly, the idea is to be able to notice if something weird is happening (like disk usage growing constantly at a rapid rate).
I'll make a proposal in a separate thread -> follow up in https://forum.cloudron.io/topic/14292/add-historical-disk-usage-in-system-info-graphs-section
-
@SansGuidon afaik, Cloudron does not log anything to syslog. Did you happen to check what was inside that massive syslog file? In one of our production Cloudrons (running for almost a decade):
$ du -sh /var/log/syslog*
5.1M    /var/log/syslog
6.6M    /var/log/syslog.1
800K    /var/log/syslog.2.gz
796K    /var/log/syslog.3.gz
812K    /var/log/syslog.4.gz
-
Hi @joseph
root@ubuntu-cloudron-16gb-nbg1-3:~# du -sh /var/log/syslog*
8.2G    /var/log/syslog
0       /var/log/syslog.1
0       /var/log/syslog.1.gz-2025083120.backup
52K     /var/log/syslog.2.gz
4.0K    /var/log/syslog.3.gz
4.0K    /var/log/syslog.4.gz

As mentioned earlier in the discussion, it's due to the sqlite backup dumps of UptimeKuma which end up in the wrong place.
root@ubuntu-cloudron-16gb-nbg1-3:~# grep 'INSERT INTO' /var/log/syslog | wc -l
47237303

And I think @nebulon had started investigating this.
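To double-check where those lines come from, peeking at a few raw entries (including their syslog tag) is usually enough:

grep -m 5 'INSERT INTO' /var/log/syslog | cut -c1-160
# the tag in front of the message shows which program/container handed the lines to syslog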
This generates a few GBs' worth of waste per day on my Cloudron instance, which causes regular outages (every few weeks).
-
For now, as a workaround, I'm applying this patch; please advise if you have any concerns with it:

diff --git a/box/src/services.js b/box/src/services.js
--- a/box/src/services.js
+++ b/box/src/services.js
@@ -1,7 +1,7 @@
 'use strict';

 exports = module.exports = {
     getServiceConfig,
     listServices,
     getServiceStatus,
@@ -308,7 +308,7 @@ async function backupSqlite(app, options) {
     // we use .dump instead of .backup because it's more portable across sqlite versions
     for (const p of options.paths) {
         const outputFile = path.join(paths.APPS_DATA_DIR, app.id, path.basename(p, path.extname(p)) + '.sqlite');
         // we could use docker exec but it may not work if app is restarting
         const cmd = `sqlite3 ${p} ".dump"`;
         const runCmd = `docker run --rm --name=sqlite-${app.id} \
             --net cloudron \
             -v ${volumeDataDir}:/app/data \
             --label isCloudronManaged=true \
-            --read-only -v /tmp -v /run ${app.manifest.dockerImage} ${cmd} > ${outputFile}`;
+            --log-driver=none \
+            --read-only -v /tmp -v /run ${app.manifest.dockerImage} ${cmd} > ${outputFile} 2>/dev/null`;
         await shell.bash(runCmd, { encoding: 'utf8' });
     }
 }
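A simple way to verify the workaround after the next uptime kuma backup run (generic commands, not part of the patch):

grep -c 'INSERT INTO' /var/log/syslog   # should stay flat once the patch is active
du -sh /var/log/syslog                  # likewise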
-
@SansGuidon I think @nebulon investigated and could not reproduce. We also run uptime kuma. Our logs are fine. Have you enabled backups inside uptime kuma or something else by any chance?
root@my:~# docker ps | grep uptime
cb00714073cb   cloudron/louislam.uptimekuma.app:202508221422060000   "/app/pkg/start.sh"   2 weeks ago   Up 2 weeks   ee6e4628-c370-4713-9cb6-f1888c32f8fb
root@my:~# du -sh /var/log/syslog*
352K    /var/log/syslog
904K    /var/log/syslog.1
116K    /var/log/syslog.2.gz
112K    /var/log/syslog.3.gz
112K    /var/log/syslog.4.gz
108K    /var/log/syslog.5.gz
112K    /var/log/syslog.6.gz
108K    /var/log/syslog.7.gz
root@my:~# grep 'INSERT INTO' /var/log/syslog | wc -l
0
-
FWIW, our db is pretty big too.

@SansGuidon the command is just
sqlite3 ${p} ".dump" and it is redirected to a file. Do you have any idea why this would log SQL commands to syslog? I can't reproduce this by running the command manually.
-
@joseph I don't see any special setting applied in UptimeKuma on my instance. Can you try to reproduce with the instructions below? Hope that makes sense.
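As background for why container stdout can end up in /var/log/syslog at all: my assumption is that the lines travel dockerd -> journald -> rsyslog, either because journald forwards to syslog or because rsyslog reads the journal directly. The effective journald setting can be checked with:

systemd-analyze cat-config systemd/journald.conf | grep -i ForwardToSyslog
# shows the ForwardToSyslog setting from the shipped defaults and any local overrides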
Ensure your default logdriver is journald:
systemctl show docker -p ExecStart

Should show something like:
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs --storage-driver=overlay2 --experimental --ip6tables --use>

Then try to mimic what backupSqlite() does (no log driver; redirect only outside docker run):
docker run --rm alpine sh -lc 'for i in $(seq 1 3); do echo "INSERT INTO t VALUES($i);"; done' > /tmp/out.sql

Observe duplicates got logged to syslog anyway:
grep 'INSERT INTO t VALUES' /var/log/syslog | wc -l   # > 0
cat /tmp/out.sql | wc -l                              # same 3 lines

Now repeat with logging disabled (what the fix does):
docker run --rm --log-driver=none alpine sh -lc 'for i in $(seq 1 3); do echo "INSERT INTO t VALUES($i);"; done' > /tmp/out2.sql
grep 'INSERT INTO t VALUES' /var/log/syslog | wc -l   # unchanged
-
@SansGuidon thanks for the repro. I have to say I can easily reproduce not only your test but also the uptime kuma backup issue on my test Cloudron. At the same time, I have verified that @joseph's observation is also correct - our prod uptime kuma does not produce any spurious logs. Wonder what is going on... I am debugging.
-
@girish Docker 27.3.1 and Ubuntu 24.04.2 LTS
-
@SansGuidon thanks, fixed in https://git.cloudron.io/platform/box/-/commit/e45af9b611f4d0c3b77d4329aac24bacf98e4e6c. I could not figure out why it's not reproducible on that old Cloudron but I can reproduce it everywhere else. Maybe some Ubuntu 20.04 quirk.
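For anyone wanting to check whether their installation already carries that commit (this assumes /home/yellowtent/box is a git checkout with history, which may not be the case on every install):

cd /home/yellowtent/box
git merge-base --is-ancestor e45af9b611f4d0c3b77d4329aac24bacf98e4e6c HEAD && echo "fix present" || echo "fix not present"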
-
Nice! thanks @girish

-
@girish said in After Ubuntu 22/24 Upgrade syslog getting spammed and grows way too much, clogging up the disk space:
@SansGuidon thanks, fixed in https://git.cloudron.io/platform/box/-/commit/e45af9b611f4d0c3b77d4329aac24bacf98e4e6c. I could not figure out why it's not reproducible on that old Cloudron but I can reproduce it everywhere else. Maybe some Ubuntu 20.04 quirk.
Can you guide me what commands to type? Thank you.
-
@james said in After Ubuntu 22/24 Upgrade syslog getting spammed and grows way too much, clogging up the disk space:
Hello @zohup
This is fixed in Cloudron Version 9.

I think what @zohup was asking is how to fix this in production environments which are still running Cloudron Version 8.
-
SansGuidon referenced this topic
-
Quickfix for users who need it NOW:
# get patch file, apply and remove and restart cloudron-syslog.service
cd /home/yellowtent/box
wget https://git.cloudron.io/platform/box/-/commit/063b1024616706971d4a1f9c50b5032727640120.diff
git apply 063b1024616706971d4a1f9c50b5032727640120.diff
rm -v 063b1024616706971d4a1f9c50b5032727640120.diff
systemctl restart cloudron-syslog.service

yes there it is, and it seems like that's the only way to fix it
@SansGuidon said in After Ubuntu 22/24 Upgrade syslog getting spammed and grows way too much, clogging up the disk space:
@james said in After Ubuntu 22/24 Upgrade syslog getting spammed and grows way too much, clogging up the disk space:
Hello @zohup
This is fixed in Cloudron Version 9.

I think what @zohup was asking is how to fix this in production environments which are still running Cloudron Version 8.
thanks for the quick fix! I applied it and it worked perfectly.

@BrutalBirdie said in After Ubuntu 22/24 Upgrade syslog getting spammed and grows way too much, clogging up the disk space:

Quickfix for users who need it NOW:
# get patch file, apply and remove and restart cloudron-syslog.service
cd /home/yellowtent/box
wget https://git.cloudron.io/platform/box/-/commit/063b1024616706971d4a1f9c50b5032727640120.diff
git apply 063b1024616706971d4a1f9c50b5032727640120.diff
rm -v 063b1024616706971d4a1f9c50b5032727640120.diff
systemctl restart cloudron-syslog.service

du -sh /var/log/syslog*
truncate -s 0 /var/log/syslog
truncate -s 0 /var/log/syslog.1
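A couple of generic follow-up checks to confirm the patch really applied and that the log has stopped growing (nothing Cloudron-specific; they assume /home/yellowtent/box is a git checkout):

git -C /home/yellowtent/box status --short     # the patched file should show up as modified
systemctl is-active cloudron-syslog.service    # should print "active" after the restart
du -sh /var/log/syslog                         # re-check after the next scheduled backup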
-
Thanks @zohup!