After Ubuntu 22/24 Upgrade syslog getting spammed and grows way too much clogging up the disk space
-
The patch described in https://forum.cloudron.io/topic/13361/after-ubuntu-22-24-upgrade-syslog-getting-spammed-and-grows-way-to-much-clogging-up-the-diskspace/11# is not available anymore (Error 500).
-
@necrevistonnezr That is because https://git.cloudron.io/ is currently throwing error 500.
This has been resolved; it is now all good again.
-
Quick fix for users who need it NOW:
# get patch file, apply and remove it, and restart cloudron-syslog.service
cd /home/yellowtent/box
wget https://git.cloudron.io/platform/box/-/commit/063b1024616706971d4a1f9c50b5032727640120.diff
git apply 063b1024616706971d4a1f9c50b5032727640120.diff
rm -v 063b1024616706971d4a1f9c50b5032727640120.diff
systemctl restart cloudron-syslog.service
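To confirm the patch actually stopped the spam (a quick check added for reference, not part of the original quick fix), watch whether the file keeps growing:
# syslog should stop growing rapidly once cloudron-syslog is restarted
du -h /var/log/syslog
sleep 60
du -h /var/log/syslog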
@BrutalBirdie this is great, solved the issue for me!
-
FYI, I hit the same problem a few times in past weeks. I understand this will be solved in Cloudron 9, right? If so, I'm a bit confused that we need to apply such a patch manually when it could be part of an update. Anyway, truncating the syslog and applying the patch got rid of 60 GB of spam in my log files.
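For reference, a safe way to do the truncation part (an editor's sketch; truncate empties the file in place, so rsyslog keeps a valid open file handle, unlike deleting the file):
# empty the file in place so rsyslog can keep writing to the same inode
truncate -s 0 /var/log/syslog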
I'm interested in how others are dealing with this.
-
@SansGuidon the issue arises only with the logs of some specific apps, it seems. Did you notice which app specifically is growing in log size? Or is it all the app logs? But you are right, this problem is solved only in Cloudron 9.
@girish I don't think I've hit this issue myself, but why not just push out an 8.3.3 with this fix?
-
@girish said in After Ubuntu 22/24 Upgrade syslog getting spammed and grows way too much clogging up the disk space:
@SansGuidon the issue arises only with the logs of some specific apps, it seems. Did you notice which app specifically is growing in log size? Or is it all the app logs? But you are right, this problem is solved only in Cloudron 9.
Based on early investigation, some apps like Syncthing and LAMP, or even Wallos, generate more logs than the rest. But this is just from looking at the data of the past few hours, after applying the diff plus logrotate tuning. I'll keep you posted if I find more interesting evidence. If someone has a script to quickly generate relevant stats, I'm interested (a sketch follows below).
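Lacking such a script in the thread, here is a minimal sketch that ranks syslog volume per tag (the tag is field 3 in the default Ubuntu syslog format shown above; on this setup the tags are container IDs or app UUIDs):
# sum bytes per syslog tag and print the top 20 offenders
awk '{bytes[$3] += length($0) + 1} END {for (t in bytes) printf "%12d %s\n", bytes[t], t}' /var/log/syslog | sort -rn | head -20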
-
@jdaviescoates Yes, that could help; in its current state, the syslog implementation generates errors in my logs, which could explain the logs growing in size. So I had to apply the diff to avoid this repeated pattern:
2025-08-31T20:42:40.149390+00:00 ubuntu-cloudron-16gb-nbg1-3 syslog.js[970341]: <30>1 2025-08-31T20:42:40Z ubuntu-cloudron-16gb-nbg1-3 b5b418fc-0f16-4cde-81a1-1213880c9a10 1123 b5b418fc-0f16-4cde-81a1-1213880c9a10 - IndexError: list index out of range
2025-08-31T20:42:40.240033+00:00 ubuntu-cloudron-16gb-nbg1-3 syslog.js[970341]: <30>1 2025-08-31T20:42:40Z ubuntu-cloudron-16gb-nbg1-3 cd4a6fed-6fd7-4616-ba0d-d0c38972774b 1123 cd4a6fed-6fd7-4616-ba0d-d0c38972774b - 172.18.0.1 - - [31/Aug/2025:20:42:40 +0000] "GET / HTTP/1.1" 200 45257 "-" "Mozilla (CloudronHealth)"
2025-08-31T20:42:41.676806+00:00 ubuntu-cloudron-16gb-nbg1-3 syslog.js[970341]: <30>1 2025-08-31T20:42:41Z ubuntu-cloudron-16gb-nbg1-3 mongodb 1123 mongodb - {"t":{"$date":"2025-08-31T20:42:41.675+00:00"},"s":"D1", "c":"REPL", "id":21223, "ctx":"NoopWriter","msg":"Set last known op time","attr":{"lastKnownOpTime":{"ts":{"$timestamp":{"t":1756672961,"i":1}},"t":42}}}
2025-08-31T20:42:43.067695+00:00 ubuntu-cloudron-16gb-nbg1-3 syslog.js[970341]: <30>1 2025-08-31T20:42:43Z ubuntu-cloudron-16gb-nbg1-3 mongodb 1123 mongodb - {"t":{"$date":"2025-08-31T20:42:43.066+00:00"},"s":"D1", "c":"NETWORK", "id":4668132, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"ReplicaSetMonitor ping success","attr":{"host":"mongodb:27017","replicaSet":"rs0","durationMicros":606}}
2025-08-31T20:42:44.061046+00:00 ubuntu-cloudron-16gb-nbg1-3 syslog.js[970341]: <30>1 2025-08-31T20:42:44Z ubuntu-cloudron-16gb-nbg1-3 b5b418fc-0f16-4cde-81a1-1213880c9a10 1123 b5b418fc-0f16-4cde-81a1-1213880c9a10 - url = link.split(" : ")[0].split(" ")[1].strip("[]")
2025-08-31T20:42:44.061077+00:00 ubuntu-cloudron-16gb-nbg1-3 syslog.js[970341]: <30>1 2025-08-31T20:42:44Z ubuntu-cloudron-16gb-nbg1-3 b5b418fc-0f16-4cde-81a1-1213880c9a10 1123 b5b418fc-0f16-4cde-81a1-1213880c9a10 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
2025-08-31T20:42:44.061100+00:00 ubuntu-cloudron-16gb-nbg1-3 syslog.js[970341]: <30>1 2025-08-31T20:42:44Z ubuntu-cloudron-16gb-nbg1-3 b5b418fc-0f16-4cde-81a1-1213880c9a10 1123 b5b418fc-0f16-4cde-81a1-1213880c9a10 - IndexError: list index out of range
-
@SansGuidon I'm using Syncthing. I've not hit this issue in that my disk space isn't running out - but perhaps that's just because I've got quite a big disk and I recently cleaned up a load of Nextcloud stuff to give me lots more space when my disk was running out!
Where do I look to check if this issue is indeed affecting me after all? Thanks
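A quick way to check (an editor's sketch based on the symptoms described in this thread, not an official procedure):
# a healthy syslog is usually MBs, not GBs
du -h /var/log/syslog*
# SQL dump text inside syslog is a strong sign you are affected
grep -c -E 'BEGIN TRANSACTION|INSERT INTO' /var/log/syslog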
-
From a deeper investigation: syslog is exploding (GBs/day) because Cloudron’s backup job dumps full SQLite DBs (e.g. Kuma’s heartbeat table) to stdout, which gets swallowed by journald/rsyslog. One backup run = ~500 MB of SQL spam in syslog in my case. Four runs/day = 2 GB+/day at least, but it could be more depending on the setup. I just triggered a backup now and it grew by almost 2 GB.
root@ubuntu-cloudron-16gb-nbg1-3:~# grep -nE "CREATE TABLE \[heartbeat\]|INSERT INTO heartbeat|BEGIN TRANSACTION" /var/log/syslog | head -10
1152:2025-08-31T21:00:37.705303+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: BEGIN TRANSACTION;
1153:2025-08-31T21:00:37.705386+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: CREATE TABLE [heartbeat](#015
1162:2025-08-31T21:00:37.705789+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: INSERT INTO heartbeat VALUES(1,1,1,1,'200 - OK','2025-03-27 23:26:53.602',566,0,0);
1163:2025-08-31T21:00:37.705828+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: INSERT INTO heartbeat VALUES(2,0,1,1,'200 - OK','2025-03-27 23:27:54.295',167,60,0);
1164:2025-08-31T21:00:37.705864+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: INSERT INTO heartbeat VALUES(3,0,1,1,'200 - OK','2025-03-27 23:28:54.506',247,60,0);
1165:2025-08-31T21:00:37.705930+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: INSERT INTO heartbeat VALUES(4,0,1,1,'200 - OK','2025-03-27 23:29:54.801',441,60,0);
1166:2025-08-31T21:00:37.705973+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: INSERT INTO heartbeat VALUES(5,0,1,1,'200 - OK','2025-03-27 23:30:55.259',200,60,0);
1167:2025-08-31T21:00:37.706010+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: INSERT INTO heartbeat VALUES(6,0,1,1,'200 - OK','2025-03-27 23:31:55.486',162,60,0);
1168:2025-08-31T21:00:37.706033+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: INSERT INTO heartbeat VALUES(7,0,1,1,'200 - OK','2025-03-27 23:32:55.691',161,60,0);
1169:2025-08-31T21:00:37.706057+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: INSERT INTO heartbeat VALUES(8,0,1,1,'200 - OK','2025-03-27 23:33:55.899',129,60,0);
I'm interested to know if someone can validate this observation on another Cloudron instance, ideally one with an existing, long-running Kuma instance:
Reproduction path
- Install Uptime Kuma on Cloudron
- Trigger a backup
- Watch /var/log/syslog: you’ll see CREATE TABLE heartbeat + endless INSERT lines (a watch sketch follows this list)
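A minimal watch sketch for the last step (assuming the default Ubuntu rsyslog layout described in this thread):
# run this while the backup is in progress
tail -f /var/log/syslog | grep -E 'BEGIN TRANSACTION|CREATE TABLE|INSERT INTO'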
Root Cause
Backup script calls sqlite3 .dump → stdout → journald → rsyslog → syslog file. Logging pipelines aren’t designed for multi-hundred-MB database dumps.
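The pipeline itself is easy to demonstrate in isolation (a hypothetical demo, not Cloudron code; it assumes rsyslog routes *.* to /var/log/syslog, as shown later in this thread):
# any container stdout under the syslog driver lands in /var/log/syslog
docker run --rm --log-driver=syslog alpine echo pipeline-test-marker
grep pipeline-test-marker /var/log/syslog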
Impact
- /var/log/syslog bloats to multi-GB
- Disk space wasted, logrotate churn
- Actual logs are drowned in noise
Fix?
- Don’t stream .dump to stdout. Redirect to a file, or use .backup. Silence the dump in logs? (a sketch follows below)
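For illustration, both alternatives in sqlite3 terms (the paths are hypothetical; this is not Cloudron's actual backup code):
# consistent online copy, nothing on stdout
sqlite3 /app/data/kuma.db ".backup '/tmp/kuma-backup.db'"
# or keep the dump but send it to a file instead of stdout
sqlite3 /app/data/kuma.db .dump > /tmp/kuma-dump.sql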
-
@SansGuidon good sleuthing. I don't currently have an instance of Uptime Kuma running so can't assist but hopefully others can.
-
That is some good investigation indeed. I tried to reproduce this, but given that Cloudron isn't using syslog as such at all, I am not sure how to reproduce it and what makes it log to syslog in your case. But maybe I am missing something obvious, or have you somehow adjusted the Docker configs around logging on that instance?
-
I've no idea; my setup seems to use journald, which could be a default and the root cause of such issues:
root@ubuntu-cloudron-16gb-nbg1-3:~# docker info | grep 'Logging Driver'
Logging Driver: journald
Am I alone with this setup? I have no memory of configuring this logging-driver behavior.
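For anyone comparing their own host: the daemon default and the per-container driver can differ, and these standard Docker CLI checks (not from the thread; the container name is a placeholder) show both:
# daemon-wide default
docker info --format '{{.LoggingDriver}}'
# driver a specific container was created with
docker inspect --format '{{.HostConfig.LogConfig.Type}}' some-app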
-
@SansGuidon said in After Ubuntu 22/24 Upgrade syslog getting spammed and grows way too much clogging up the disk space:
Am I alone with this setup?
Nope, I seem to have the same:
root@Ubuntu-2204-jammy-amd64-base ~ # docker info | grep 'Logging Driver'
Logging Driver: journald
-
Ah no, that is correct. Sorry, what I meant is that Cloudron task or app related logs should not show up in the default syslog as such, like when you would run
journalctl -f
However, you should have a cloudron-syslog daemon running. Check with
systemctl status cloudron-syslog
That one would dump the corresponding logs into the correct places in
/home/yellowtent/platformdata/logs/...
So I am still curious how it ends up in
/var/log/syslog
and why it would then log DB dump data there.
-
Thanks for your feedback, @nebulon
I'm not sure why, but Cloudron created my app containers with Docker’s syslog log driver. Those containers write their stdout/stderr straight into the host’s rsyslog, which in turn writes to /var/log/syslog.
So when an app (Uptime Kuma in my case) runs a huge sqlite3 .dump during a Cloudron task/backup, that dump goes to stdout → syslog → /var/log/syslog, ballooning the file by GBs. This is not journald forwarding (it’s disabled). Cloudron’s own cloudron-syslog also logs per-app to /home/yellowtent/platformdata/logs/…, so right now there’s duplication. I’m not looking for a local workaround; I’d like Cloudron to confirm the intent here and provide a platform fix.
Below are the findings and some questions/proposals to pursue.
Dockerd default vs. container reality
systemctl show docker -p ExecStart
# ... --log-driver=journald ...
docker ps -a -q | xargs -r -I{} docker inspect {} \
  --format '{{.Name}} {{.HostConfig.LogConfig.Type}}' | sort -u
# ~80 containers → all: syslog
The daemon default is journald, but all existing containers are syslog (likely from when they were created).
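A one-liner to tally the drivers across all containers (the same inspect call as above, just aggregated; an editor's sketch):
# count containers per log driver
docker ps -a -q | xargs -r docker inspect --format '{{.HostConfig.LogConfig.Type}}' | sort | uniq -c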
Not journald → syslog; it’s Docker → rsyslog
grep -n 'ForwardToSyslog' /etc/systemd/journald.conf
# ForwardToSyslog=no
journald isn’t forwarding.
Rsyslog is writing everything to /var/log/syslog
grep -nH . /etc/rsyslog.d/50-default.conf | sed -n '8,12p'
# *.*;auth,authpriv.none -/var/log/syslog
Cloudron syslog collector is active (so we have duplicate paths)
systemctl status cloudron-syslog
# active (running)
ls /home/yellowtent/platformdata/logs/
# per-app log dirs + syslog.sock present
The big spill: SQL dump text in logs exactly at backup window
root@ubuntu-cloudron-16gb-nbg1-3:~# grep -nE 'BEGIN TRANSACTION|CREATE TABLE \[heartbeat\]|INSERT INTO heartbeat' /var/log/syslog | head -3
1152:2025-08-31T21:00:37.705303+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: BEGIN TRANSACTION;
1153:2025-08-31T21:00:37.705386+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: CREATE TABLE [heartbeat](#015
1162:2025-08-31T21:00:37.705789+00:00 ubuntu-cloudron-16gb-nbg1-3 d6750120460b[1123]: INSERT INTO heartbeat VALUES(1,1,1,1,'200 - OK','2025-03-27 23:26:53.602',566,0,0);
And Cloudron task timeline around the same minute:
root@ubuntu-cloudron-16gb-nbg1-3:~# grep -n '2025-08-31T21:0' /home/yellowtent/platformdata/logs/box.log | sed -n '1,40p'
9200:2025-08-31T21:00:00.014Z box:janitor Cleaning up expired tokens
9201:2025-08-31T21:00:00.016Z box:eventlog cleanup: pruning events. creationTime: Mon Jun 02 2025 21:00:00 GMT+0000 (Coordinated Universal Time)
9202:2025-08-31T21:00:00.054Z box:locks write: current locks: {"backup_task":null}
9203:2025-08-31T21:00:00.054Z box:locks acquire: backup_task
9204:2025-08-31T21:00:00.054Z box:janitor Cleaned up 0 expired tokens
9205:2025-08-31T21:00:00.166Z box:tasks startTask - starting task 7053 with options {"timeout":86400000,"nice":15,"memoryLimit":1024,"oomScoreAdjust":-999}. logs at /home/yellowtent/platformdata/logs/tasks/7053.log
9206:2025-08-31T21:00:00.168Z box:shell tasks /usr/bin/sudo -S -E /home/yellowtent/box/src/scripts/starttask.sh 7053 /home/yellowtent/platformdata/logs/tasks/7053.log 15 1024 -999
9207:2025-08-31T21:00:00.249Z box:shell Running as unit: box-task-7053.service; invocation ID: fa4cf334a41b43fc9e06d6612bf5a9c1
9209:2025-08-31T21:00:00.395Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9210:2025-08-31T21:00:10.288Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9211:2025-08-31T21:00:20.321Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9212:2025-08-31T21:00:30.367Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9213:2025-08-31T21:00:40.579Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9214:2025-08-31T21:00:50.457Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9215:2025-08-31T21:01:00.455Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9216:2025-08-31T21:01:10.350Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9217:2025-08-31T21:01:20.413Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9218:2025-08-31T21:01:30.407Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9219:2025-08-31T21:01:40.367Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9220:2025-08-31T21:01:50.352Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9221:2025-08-31T21:02:00.390Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9222:2025-08-31T21:02:10.709Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9223:2025-08-31T21:02:11.024Z box:shell system: swapon --noheadings --raw --bytes --show=type,size,used,name
9224:2025-08-31T21:02:20.338Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9225:2025-08-31T21:02:30.311Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9226:2025-08-31T21:02:40.300Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9227:2025-08-31T21:02:50.308Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9228:2025-08-31T21:03:00.406Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9229:2025-08-31T21:03:10.269Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9230:2025-08-31T21:03:20.363Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9231:2025-08-31T21:03:30.265Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9232:2025-08-31T21:03:40.281Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9233:2025-08-31T21:03:50.312Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9234:2025-08-31T21:04:00.321Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9235:2025-08-31T21:04:10.284Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9236:2025-08-31T21:04:20.357Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9237:2025-08-31T21:04:30.242Z box:apphealthmonitor app health: 31 running / 0 stopped / 0 unresponsive
9238:2025-08-31T21:04:30.281Z box:shell Finished with result: success
9245:2025-08-31T21:04:30.288Z box:shell Service box-task-7053 finished with exit code 0
9247:2025-08-31T21:04:30.289Z box:tasks startTask: 7053 completed with code 0
Questions / Suggestions
- Is syslog the intended log driver for app containers?
Dockerd on my host now runs with --log-driver=journald, but all app containers remain on syslog unless re-created.
- Platform-level fix proposals (any/all):
- Migrate app containers to journald on updates/repairs so they inherit the daemon default (no /var/log/syslog involvement).
- Ensure task/backup helpers don’t emit large dumps to stdout (redirect to files/pipes consumed by cloudron-syslog, not rsyslog).
- Ship an rsyslog drop-in that stops Docker-originated container stdout from landing in /var/log/syslog, since Cloudron already captures per-app logs under /home/yellowtent/platformdata/logs/ (a sketch follows after this list).
This would prevent another GB-scale blow-up when an app emits a lot to stdout during backups or maintenance.
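For the drop-in idea, one possible shape (a hypothetical sketch; it assumes Docker's syslog driver uses its default tag, the 12-character container ID, which matches the d6750120460b[1123] tags seen above):
# /etc/rsyslog.d/05-docker-containers.conf (hypothetical file name)
# drop messages whose syslog tag looks like a 12-char container ID
if re_match($programname, '^[0-9a-f]{12}$') then stop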
What do you think, @nebulon ?
Thanks in advance!