After Ubuntu 22/24 upgrade, syslog is getting spammed and grows way too much, clogging up the disk space
-
In the meantime, it seems the problem still persists:
root@ubuntu-cloudron-16gb-nbg1-3:~# du -sh /var/log/syslog*
15G     /var/log/syslog
26G     /var/log/syslog.1
0       /var/log/syslog.1.gz-2025083120.backup
52K     /var/log/syslog.2.gz
4.0K    /var/log/syslog.3.gz
4.0K    /var/log/syslog.4.gz
Disk graph shows:

docker                           25.9 GB
docker-volumes                   7.79 GB
/apps.swap                       4.29 GB
platformdata                     3.77 GB
boxdata                          58.34 MB
maildata                         233.47 kB
Everything else (Ubuntu, etc.)   48.67 GB
root@ubuntu-cloudron-16gb-nbg1-3:~# truncate -s 0 /var/log/syslog
root@ubuntu-cloudron-16gb-nbg1-3:~# truncate -s 0 /var/log/syslog.1
After truncating the logs (see above), I reclaim the disk space, but I really need to work on a more effective patch / housekeeping job to prevent this from happening again (see the sketch after the listing below).
This disk contains:

docker                           25.9 GB
docker-volumes                   8.02 GB
/apps.swap                       4.29 GB
platformdata                     3.8 GB
boxdata                          57.93 MB
maildata                         233.47 kB
Everything else (Ubuntu, etc.)   7.62 GB
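As a stopgap, here is a minimal housekeeping sketch of what I have in mind, assuming an hourly cron job; the 1 GB cap and the file list are my own choices, adjust to taste:

#!/usr/bin/env bash
# Truncate syslog files once they exceed a size cap, mirroring the manual
# truncate -s 0 commands above. Intended to run from cron, e.g. hourly.
MAX_BYTES=$((1024 * 1024 * 1024))   # hypothetical 1 GB cap
for f in /var/log/syslog /var/log/syslog.1; do
    [ -f "$f" ] || continue
    if [ "$(stat -c %s "$f")" -gt "$MAX_BYTES" ]; then
        truncate -s 0 "$f"
    fi
done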
I would also love it if the Cloudron disk usage view were a graph like the ones for CPU and memory. Maybe it's already planned for Cloudron 9; otherwise, should I mention that idea in a new thread, @nebulon?
-
Hello @SansGuidon
You mean the disk usage as a historical statistic and not only a singular point when checking?
If this is what you mean, no that is not part of Cloudron 9 at the moment.
But in my opinion, a very welcome feature request after Cloudron 9 is released!
-
@james said in After Ubuntu 22/24 upgrade, syslog is getting spammed and grows way too much, clogging up the disk space:
Hello @SansGuidon
You mean the disk usage as a historical statistic and not only a singular point when checking?
If this is what you mean, no that is not part of Cloudron 9 at the moment.
But in my opinion, a very welcome feature request after Cloudron 9 is released!

Exactly, the idea is to be able to notice if something weird is happening (like disk usage growing constantly at a rapid rate).
I'll make a proposal in a separate thread -> follow-up in https://forum.cloudron.io/topic/14292/add-historical-disk-usage-in-system-info-graphs-section
-
@SansGuidon afaik, Cloudron does not log anything to syslog. Did you happen to check what was inside that massive syslog file? In one of our production cloudrons (running for almost a decade):
$ du -sh /var/log/syslog*
5.1M    /var/log/syslog
6.6M    /var/log/syslog.1
800K    /var/log/syslog.2.gz
796K    /var/log/syslog.3.gz
812K    /var/log/syslog.4.gz
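If you want to see who is writing to it, a quick tally of the program tag works; this is a sketch, assuming the default rsyslog line format where the fifth field is the tag:

awk '{print $5}' /var/log/syslog | sort | uniq -c | sort -rn | head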
-
Hi @joseph
root@ubuntu-cloudron-16gb-nbg1-3:~# du -sh /var/log/syslog*
8.2G    /var/log/syslog
0       /var/log/syslog.1
0       /var/log/syslog.1.gz-2025083120.backup
52K     /var/log/syslog.2.gz
4.0K    /var/log/syslog.3.gz
4.0K    /var/log/syslog.4.gz
As mentioned earlier in the discussion, it's due to the sqlite backup dumps of UptimeKuma, which end up in the wrong place.
root@ubuntu-cloudron-16gb-nbg1-3:~# grep 'INSERT INTO' /var/log/syslog | wc -l
47237303
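To confirm which container those lines come from, this kind of check can help; a sketch on my side, assuming Docker's journald log driver, which tags each journal entry with a CONTAINER_NAME field:

journalctl -o json --since today \
    | grep -o '"CONTAINER_NAME":"[^"]*"' \
    | sort | uniq -c | sort -rn | head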
And I think @nebulon had started investigating this.
This generates a few GBs' worth of waste per day on my Cloudron instance, which causes regular outages (every few weeks).
-
For now, as a workaround, I'm applying this patch; please advise if you have any concerns with it.
diff --git a/box/src/services.js b/box/src/services.js
--- a/box/src/services.js
+++ b/box/src/services.js
@@ -1,7 +1,7 @@
 'use strict';

 exports = module.exports = {
     getServiceConfig,
     listServices,
     getServiceStatus,
@@ -308,7 +308,7 @@ async function backupSqlite(app, options) {
     // we use .dump instead of .backup because it's more portable across sqlite versions
     for (const p of options.paths) {
         const outputFile = path.join(paths.APPS_DATA_DIR, app.id, path.basename(p, path.extname(p)) + '.sqlite');
         // we could use docker exec but it may not work if app is restarting
         const cmd = `sqlite3 ${p} ".dump"`;
         const runCmd = `docker run --rm --name=sqlite-${app.id} \
             --net cloudron \
             -v ${volumeDataDir}:/app/data \
             --label isCloudronManaged=true \
-            --read-only -v /tmp -v /run ${app.manifest.dockerImage} ${cmd} > ${outputFile}`;
+            --log-driver=none \
+            --read-only -v /tmp -v /run ${app.manifest.dockerImage} ${cmd} > ${outputFile} 2>/dev/null`;
         await shell.bash(runCmd, { encoding: 'utf8' });
     }
 }
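To sanity-check the workaround after the next backup run, something like this should do (my own verification idea, not part of the patch):

before=$(grep -c 'INSERT INTO' /var/log/syslog)
# ...trigger an Uptime Kuma backup from the Cloudron dashboard, then:
after=$(grep -c 'INSERT INTO' /var/log/syslog)
echo "before=$before after=$after"   # the count should no longer grow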
-
@SansGuidon I think @nebulon investigated and could not reproduce. We also run uptime kuma. Our logs are fine. Have you enabled backups inside uptime kuma or something else by any chance?
root@my:~# docker ps | grep uptime
cb00714073cb   cloudron/louislam.uptimekuma.app:202508221422060000   "/app/pkg/start.sh"   2 weeks ago   Up 2 weeks   ee6e4628-c370-4713-9cb6-f1888c32f8fb
root@my:~# du -sh /var/log/syslog*
352K    /var/log/syslog
904K    /var/log/syslog.1
116K    /var/log/syslog.2.gz
112K    /var/log/syslog.3.gz
112K    /var/log/syslog.4.gz
108K    /var/log/syslog.5.gz
112K    /var/log/syslog.6.gz
108K    /var/log/syslog.7.gz
root@my:~# grep 'INSERT INTO' /var/log/syslog | wc -l
0
-
FWIW, our db is pretty big too.
@SansGuidon the command is just
sqlite3 ${p} ".dump"
and it is redirected to a file. Do you have any idea why this would log SQL commands to syslog? I can't reproduce this by running the command manually.
-
@joseph I don't see any special setting being applied in my UptimeKuma instance. Can you try to reproduce with the instructions below? Hope that makes sense.
Ensure your default log driver is journald:
systemctl show docker -p ExecStart
It should show something like:
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// --log-driver=journald --exec-opt native.cgroupdriver=cgroupfs --storage-driver=overlay2 --experimental --ip6tables --use>
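Alternatively (my addition, same information), docker info reports the daemon-wide default directly:

docker info --format '{{.LoggingDriver}}'   # should print "journald"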
Then try to mimic what backupSqlite() does (no --log-driver override; stdout redirected only outside docker run):
docker run --rm alpine sh -lc 'for i in $(seq 1 3); do echo "INSERT INTO t VALUES($i);"; done' > /tmp/out.sql
Observe duplicates got logged to syslog anyway:
grep 'INSERT INTO t VALUES' /var/log/syslog | wc -l   # > 0
cat /tmp/out.sql | wc -l                              # same 3 lines
Now repeat with logging disabled (what the fix does):
docker run --rm --log-driver=none alpine sh -lc 'for i in $(seq 1 3); do echo "INSERT INTO t VALUES($i);"; done' > /tmp/out2.sql
grep 'INSERT INTO t VALUES' /var/log/syslog | wc -l   # unchanged
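As I understand it, this happens because with the journald driver the Docker daemon itself captures the container's stdout for the journal, regardless of where the client redirects it on the host. The --log-driver=none override is per-container, which a spot check can confirm (hypothetical, with any running container ID):

docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container-id>   # "none" vs "journald"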
-
@SansGuidon thanks for the repro. I have to say, I can easily reproduce not only your test but also the uptime kuma backup issue on my test Cloudron. At the same time, I have verified that @joseph's observation is also correct: our prod uptime kuma does not produce any spurious logs. Wonder what is going on... I am debugging.
-
@girish Docker 27.3.1 and Ubuntu 24.04.2 LTS
-
@SansGuidon thanks, fixed in https://git.cloudron.io/platform/box/-/commit/e45af9b611f4d0c3b77d4329aac24bacf98e4e6c. I could not figure out why it's not reproducible on that old Cloudron, but I can reproduce it everywhere else. Maybe some Ubuntu 20.04 quirk.