If I delete the dump file and then decide to restore a backup from that date - will the restore fail?
No, a copy of the postgresql dump is in the backup. The file at /home/yellowtent/platformdata/<appid>/postgresqldump is a temporary file. When we back up an app, we create a dump from postgresql and save it to that file. Then we upload that file to wherever the backup is located. After the backup is done, that file has no real use, but we keep it around because in some rare cases where you don't have backups it can help with recovery.
So, to summarize:
- It is safe to delete /home/yellowtent/appsdata/app-container-id/postgresqldump. But this file is recreated as part of every backup, so deleting it is going to be a pain - you would have to delete it every day.
- Your final backup location will have a copy of the above file, under <timestamp>/appid/postgresqldump. This is your backup, don't delete this file!
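To make the distinction concrete, a minimal sketch (paths as in the posts above; <appid> and the backup location are placeholders for your setup):

```
# temporary dump - safe to delete, it is recreated on every backup run
rm /home/yellowtent/platformdata/<appid>/postgresqldump

# the backup copy is what a restore reads - do NOT delete it:
# <backup-location>/<timestamp>/<appid>/postgresqldump
```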
@jodumont yes. Samba is the only protocol on your list that is really made to be mounted locally. Personally I would try to avoid having to rely on php for access to remote filesystems (regardless of the protocol).
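In case someone wants to go that route, a minimal SMB mount sketch (server, share, mountpoint and credentials file are placeholders):

```
sudo apt install cifs-utils
sudo mkdir -p /mnt/share
# /root/.smbcredentials holds username=... and password=... lines
sudo mount -t cifs //server/share /mnt/share -o credentials=/root/.smbcredentials,uid=1000
```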
@girish thank you for the insight. If I extend the existing drive I'll make sure to keep it SSD as it already is; if I go the first route I may try HDD instead. I'll check - extending the existing disk isn't an easy task unfortunately, but it may be the better move. Of course the drawback there, I guess, is that I'll be paying for far too much disk space and will need to do it all over again if, for example, I lose one of the clients using so much email space (email is the single biggest consumer on my server right now, by a lot). I'll have to run some tests. 🙂
Yeah, I was hoping to get a picture of the storage used by a user or user group across all apps on Cloudron. Apart from companies or single-user installs of Cloudron, the thinking was that enabling groups of people to share a server and switch to self-hosted open source software would require insight into the main variable resource - storage - so that the cost sharing can be transparent.
I understand that it's not feasible to implement, so I will need to think of other community models for resource sharing. Mounting a user's own NFS storage is an option, but Cloudron apps are restricted to a single storage location per app instance. Perhaps users mounting their own external storage in an app like Nextcloud is an option.
Any thoughts on a clean model for this kind of resource sharing scenario? This seems to me an important consideration for "regular" users of Cloudron, who might want to get together in order to make switching over from Google etc. financially viable.
@smilebasti The /home/yellowtent/appsdata directory is the location of apps. This size seems to roughly match the Nextcloud size. As for docker, you should not use du tools inside docker's image directories, since those are overlays and du is not smart enough to figure out the size correctly. Try docker system df to get a better idea of the actual size docker uses (this is what is reported in the graph as ~5GB). The volumes also link into appsdata, so they might be double counted by the du tools.
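For comparison (assuming the default docker root of /var/lib/docker):

```
# du walks the overlay layers and double counts shared data - misleading
sudo du -sh /var/lib/docker/overlay2

# docker's own accounting for images, containers and volumes
docker system df
docker system df -v   # per-object breakdown
```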
To take a wild guess, maybe you were backing up to the file system for some time before you moved to NAS via SMB? If this was the case, then you should remove the old backups manually from /var/backups. You can just safely nuke all the timestamped directories and the snapshot directory inside it.
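The cleanup itself is just deleting those directories (a sketch; inspect first, the directory names below are placeholders):

```
ls /var/backups                               # inspect what is there
sudo rm -rf /var/backups/<timestamped-dir>    # repeat for each old backup
sudo rm -rf /var/backups/snapshot
```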
@girish Ah interesting, yes that definitely seems like an OVH-specific issue then. At least they're heading in the right direction I guess towards the end of this year. 🙂 Thanks for shedding some light on that one for me.
@girish Oops, I just realized that I didn't reply to your question yet. What I changed, and what maybe has to be protected from being accidentally overwritten, is the 'datadirectory' option in /app/data/config/config.php. I changed that path from the default to '/media/mymountpoint'.
I don't remember whether I had Nextcloud in maintenance mode or had put the app in recovery mode while I changed that path and moved the files, but it was one of the two. However, I had some duplicate storage paths in the database afterwards, so I manually updated the oc_storages table as described here and ran a files:scan afterwards. But that should not be related to what the update code overwrites, I guess.
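A sketch of what such a fix can look like (the SQL and the occ call are standard Nextcloud, but the numeric_id, database name and exact invocation are placeholders, and Cloudron's packaging may wrap occ differently):

```
# point the duplicate storage entry at the new data directory
mysql -e "UPDATE oc_storages SET id = 'local::/media/mymountpoint/' WHERE numeric_id = 1;" nextcloud

# then rescan all files so the file cache matches the new path
sudo -u www-data php occ files:scan --all
```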
For future reference, you have to use resize2fs to grow the ext4 filesystem when you expand the disk size. This is at least the case for most public cloud block storage, and it is a manual operation. Cloudron could potentially do this, but I am guessing there is a good reason not to do it automatically, otherwise the public cloud folks would have done this already 🙂
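A typical sequence after enlarging the disk in the provider's panel (a sketch; device names are examples, check yours with lsblk):

```
lsblk                       # confirm the kernel sees the new size
sudo growpart /dev/sda 1    # grow the partition (from cloud-guest-utils)
sudo resize2fs /dev/sda1    # grow the ext4 filesystem to fill it
df -h                       # verify
```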
Thank you @nebulon for solving this issue off-forums.
For future reference: the problem was caused by the external storage being unmounted after a reboot. Remounting the storage drive (I use a netcup VPS) and then repairing in the Cloudron dashboard solved the problem. Adding the mount to /etc/fstab fixed this for subsequent updates/reboots.
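For anyone hitting the same thing, a minimal /etc/fstab sketch (UUID and mountpoint are placeholders; find the UUID with blkid):

```
# mount the external volume at boot; nofail keeps boot from hanging if the disk is missing
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/externaldisk  ext4  defaults,nofail  0  2
```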