@girish This is still a huge problem.
My production server has failing applications again because disk space filled up. Luckily DigitalOcean's backup functionality saved my setup this time. If I had relied only on Cloudron for backups, it would have been a disaster.
I am not asking for help fixing it, and I am not blaming anyone. But this needs an engineered solution upstream, from you.
You could, for instance, recommend that we use separate storage volumes alongside the system drive. That would bring down the TCO for storage. Recommendations and verified testing from you would be valuable to us as customers.
I imagine hundreds of your other customers are seeing the same issue.
Also:
Your assertion above that disk space is cheap is true for physical drives, but not for VPS storage. Doubling my disk space from 80 GB to 160 GB also doubles my yearly VPS cost, and I would guess a sizable portion of your users host Cloudron on a VPS.
Of course, you could sidestep the problem temporarily by recommending 160 GB of storage capacity, but that might alienate some potential users.
Now onwards to repairing my Cloudron install and apps!
edit:
Solving the root cause
I used ncdu to browse every container:
- Gained 1 GB by deleting /usr/local/share/.cache/yarn/ on a container volume
- Gained 500 MB by deleting the Anaconda distribution's package cache within a container volume
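The ncdu triage above can also be scripted with plain `du`, which is handy for cron jobs or quick checks over SSH. A minimal sketch; the helper name `largest_dirs` and its depth/count defaults are my own choices, not anything Cloudron ships:

```shell
#!/bin/sh
# Sketch: like ncdu, but scriptable. Lists the biggest subdirectories
# under a given root so caches such as /usr/local/share/.cache/yarn/
# stand out immediately.
largest_dirs() {
  root=$1
  depth=${2:-2}
  # -x: stay on one filesystem, -k: sizes in KiB, sorted largest first
  du -x -k -d "$depth" "$root" 2>/dev/null | sort -rn | head -n 10
}

# Example (path is an assumption; on Cloudron, app data typically
# lives under /home/yellowtent):
# largest_dirs /home/yellowtent 3
```

Pointing it at each container's volume directory reproduces the two findings above without opening an interactive session per container.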
Analysis: Meteor seems to have space-wasteful ways of leaving old versions of libraries and builds spread around (?).