server down: apps not restarting
It could also be that you are storing backups locally. You can check in the backup tab at yourcloudron.com/#/backups. If so, you will have to delete those somehow. This one line in your error message certainly points to the main culprit:
no space left on device. You need to figure out what's using up the space.
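If you're not sure where the space went, something like this is a common way to narrow it down (standard tools, nothing Cloudron-specific):

```
# overall usage per mounted filesystem
df -h

# largest top-level directories on the root filesystem
du -xhs /* 2>/dev/null | sort -h | tail
```

Then cd into the biggest directory and repeat the du until you find the culprit.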
/tmp & /dev/pts are pseudo-filesystems and are not managed via fstab.
They are too small.
mkdir /tmp/pty465273103: no space left on device: unknown
Have you checked if /tmp is mounted correctly and is writable? It should appear in df -h even if it is a pseudo-filesystem. Since you provided no information it's hard to help you. Please note that support time is expensive and Cloudron's support only covers problems that are directly caused by Cloudron. In addition, time spent on support cannot be used for development, so it is in our best interest to help you here.
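For example (standard commands; the test filename is just an example):

```
# how full is the filesystem backing /tmp?
df -h /tmp

# how is /tmp mounted?
findmnt /tmp

# is /tmp writable?
touch /tmp/.write-test && rm /tmp/.write-test && echo writable
```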
/tmp is not a tmpfs, it's on the root filesystem, and there are GBs free.
It seems to have to do with cgroups and the space within the containers.
When the system containers and one app are running, it's exhausted.
I tried an older kernel, same result.
Thanks everybody for trying to help.
I think that's a pure Cloudron/system/cgroup problem, as I haven't touched that system, and I never ran into that on my various other Docker projects/servers.
@chymian-0 From what I can tell, there is inode exhaustion in the rootfs. If you do `df -i`, it tells you that you have run out of inodes. I think this is because this is run on top of btrfs. btrfs is notorious for this. We used to use btrfs on Cloudron 2-3 years ago and gave up because it's just one issue or another like this. You can do `btrfs balance` from outside the Cloudron to free up some space, but I am not a btrfs expert.
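For reference, roughly what that looks like (the usage filter value below is just an example; as said, handle btrfs with care):

```
# check inode usage; IUse% at 100% means the filesystem is out of inodes
df -i

# on btrfs: rebalance, rewriting data chunks that are less than 50% full
# (50 is an example value; start low and raise it if nothing is reclaimed)
btrfs balance start -dusage=50 /
```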
Kudos to @girish, he found the real problem (out of inodes) within minutes.
From there, we could nail down the cause by following this:
One cannot raise the inode count after filesystem creation. Normally, a tar of the rootfs, a reformat, and a restore would be necessary (see the sketch below).
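For the record, that offline route would look roughly like this on ext4 (a sketch only: /dev/sda1, /mnt/old and the inode count are example values, and mkfs destroys the filesystem, so only do this from a rescue system with a verified backup):

```
# from a rescue system, with the old root mounted at /mnt/old
tar -C /mnt/old -cpf /backup/rootfs.tar .

# recreate the filesystem with a larger inode count (example: 16M inodes)
umount /mnt/old
mkfs.ext4 -N 16777216 /dev/sda1

# restore the contents
mount /dev/sda1 /mnt/old
tar -C /mnt/old -xpf /backup/rootfs.tar
```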
But to find out who is consuming all the inodes, one can do the following:
```
du -s --inodes * 2>/dev/null | sort -g
```
then cd into the last dir in the output and repeat.
Full disclosure: not all OSes support the --inodes flag for the du command (my macOS does not), but many Linux distributions do.
One has to cd into the dir with the most inodes, recursively going down the tree, to finally find the dir with the biggest inode consumption (there's a scripted sketch of this below).
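If you'd rather script that descent than repeat it by hand, a minimal sketch (assumes GNU du for --inodes, per the disclosure above; hidden directories are skipped by the glob):

```
#!/bin/bash
# follow the subdirectory with the most inodes down the tree
dir=/
while true; do
    # heaviest subdirectory of $dir by inode count (du output: "count<TAB>name")
    next=$(cd "$dir" && du -s --inodes -- */ 2>/dev/null | sort -g | tail -n1 | cut -f2)
    [ -z "$next" ] && break   # no subdirectories left
    dir="$dir$next"
    echo "descending into $dir"
done
echo "heaviest directory: $dir"
```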
In this case, as girish had mentioned, it was caused by a misconfigured nullmailer writing tons of error messages to /var/spool/nullmailer/failed, using 4.4M inodes…
Deleting that dir eased the situation ad hoc.
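In case someone else hits this, a slightly safer variant of that cleanup (deletes the queued files but keeps the spool directory itself; double-check the path before running):

```
# count the queued failure messages (one file = one inode)
find /var/spool/nullmailer/failed -type f | wc -l

# remove the queued failures but keep the directory itself
find /var/spool/nullmailer/failed -type f -delete
```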
Rebooting the server and restarting all failed apps (via GUI & CLI) fixed it.
Thanks for all your help!