Yes @james, I have also thought of using that, but manually running it on all mailboxes sounds like a pain I don't want to go through.
The way I currently envision it is writing a script for imapsync that uses the Cloudron API to get all mailboxes and impersonates the mailbox users to run imapsync on them automatically.
But then again, maybe blocking the port is the way to go, so I don't have to write that script. The whole backup and restore process should be done quite quickly anyway.
I am wondering, though, whether others have gone through this as well?
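The script idea above could be sketched roughly like this. Note that everything marked "assumed" is hypothetical: the Cloudron API endpoint path, the hostnames, and the Dovecot master-user impersonation syntax (`user*masteruser`) all need to be checked against your own Cloudron setup and the imapsync docs before use.

```python
"""Sketch: run imapsync for every Cloudron mailbox (assumptions flagged inline)."""
import json
import subprocess
import urllib.request

CLOUDRON_API = "https://my.example.com"   # assumed dashboard URL
API_TOKEN = "REPLACE_ME"                  # assumed Cloudron API token
SOURCE_IMAP = "mail.example.com"          # assumed source IMAP host
DEST_IMAP = "imap.target.example"         # assumed destination IMAP host


def list_mailboxes(domain):
    """Fetch mailbox names for a domain; the endpoint path is an assumption."""
    req = urllib.request.Request(
        f"{CLOUDRON_API}/api/v1/mail/{domain}/mailboxes",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return [m["name"] for m in json.load(resp)["mailboxes"]]


def imapsync_command(user, master_user, master_password):
    """Build one imapsync call that impersonates the mailbox owner via a
    Dovecot master user ('user*masteruser' login), so no per-user password
    is needed. Flag names are standard imapsync options."""
    return [
        "imapsync",
        "--host1", SOURCE_IMAP,
        "--user1", f"{user}*{master_user}",
        "--password1", master_password,
        "--host2", DEST_IMAP,
        "--user2", user,
        "--password2", master_password,
    ]


def sync_all(domain, master_user, master_password):
    """Run imapsync sequentially for every mailbox of the domain."""
    for user in list_mailboxes(domain):
        subprocess.run(imapsync_command(user, master_user, master_password),
                       check=True)
```

This only shows the shape of the automation; error handling, logging, and parallelism are left out on purpose.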
@nebulon said in Wekan backup errors blocking my Cloudron backups from completing.:
This indicates that the MongoDB instance/service could not finish creating the database dump for Wekan.
It's odd, because if I run a backup from the app it completes in a few seconds. There is very little data to back up.
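One way to narrow this down might be to run the dump by hand and see whether mongodump itself hangs or errors. This sketch only prints the command for inspection; the fallback URI is a placeholder, and the real connection string should come from `CLOUDRON_MONGODB_URL` inside the Wekan container.

```shell
# Diagnostic sketch (not Cloudron's actual backup code): build a manual
# mongodump invocation for the Wekan database. The fallback URI is a
# placeholder; use the real $CLOUDRON_MONGODB_URL from the app's environment.
MONGO_URL="${CLOUDRON_MONGODB_URL:-mongodb://user:pass@mongodb:27017/wekan}"
echo "mongodump --uri '$MONGO_URL' --archive=/tmp/wekan-test.dump --gzip --verbose"
# When the printed command looks right, run it directly (drop the echo).
```

If the manual dump also stalls, the problem is in the MongoDB service rather than the backup orchestration.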
@nebulon said in Wekan backup errors blocking my Cloudron backups from completing.:
Maybe it ran out of memory
Unlikely, given it's got over 4 GB and I don't think I've ever seen it hit 10% usage.
@nebulon said in Wekan backup errors blocking my Cloudron backups from completing.:
Do you see any crash or more info in the actual mongodb service logs?
I think I've looked and not seen anything obvious but I'll look again...
Did you look at the logs I shared previously?
said in Wekan backup errors blocking my Cloudron backups from completing.:
Here are some logs:
https://paste.uniteddiversity.coop/?7aed63d3ef7af3ba#BiicSooyTEjs2oEY1xQoGtizyB2Lwk7C6UWgCoBZJKjP
Perhaps it could be something to do with this:
vm.max_map_count is too low
currentValue: 65530, recommendedMinimum: 1677720
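That warning is worth chasing: MongoDB itself logs it at startup when the kernel's memory-map limit is below its recommended minimum, and the log above shows the default 65530 against a recommendation of 1677720. A quick check-and-raise sketch (raising the value needs root):

```shell
# Check the current kernel limit; the log above reports 65530 with a
# recommended minimum of 1677720.
current=$(sysctl -n vm.max_map_count 2>/dev/null || echo 65530)
echo "vm.max_map_count is currently $current"

# To raise it (as root), uncomment the next two lines:
# sysctl -w vm.max_map_count=1677720
# echo 'vm.max_map_count=1677720' > /etc/sysctl.d/99-mongodb.conf  # persist across reboots
```

Whether this is actually what makes the Wekan dump fail is not certain, but it is a cheap thing to rule out.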
@makemrproper yeah, and actually it seems I might've been right after all! I was sure I'd read something about the size... so I just searched that thread for "size", and it is indeed there:
Backup integrity - store size and checksum of backups.
If they've got the size stored, then it should be easy to display it somewhere too!
@leemuljadi I forked your question here
https://forum.cloudron.io/topic/13846/backup-restore-error-route-unavailable-post-activation
since this is now a different question and issue than before.
For the sake of keeping topics and solutions clean, please continue in the new topic.
We discussed this problem here; I had the same problem just a few days ago:
https://forum.cloudron.io/topic/12632/tarextract-pipeline-error-invalid-tar-header/
@nebulon I appreciate your advice. The backup server is not in the same location as the Cloudron server; it isn't even in the same country (as with all my backups).
Maybe I wasn't clear: the server has an OS disk (NVMe) and an HDD for backups. The mount I showed is the HDD mount; it was just to check whether any filesystem mount options were needed to improve performance.
Thank you all.
Thanks to everyone for the responses. I fully understand the backups now and have reported to my board that it's indeed production-ready for our organization.
Using tgz instead of rsync did let the backup run normally, at least this once. Let's call it a workaround for now; I'll see whether it keeps working.
Looks like Cloudflare is interfering here and changing the data in transit? You would have to contact Cloudflare about this, but I don't have high hopes that it would get anywhere.
For the first one, I guess you are suggesting a real rsync between the existing files and the remote. I think the issue is that rsync itself relies on a remote rsync service. Cloudron's rsync is, strictly speaking, not rsync; it's more like "individual files are synced", as opposed to a tarball. Even with sshfs, one cannot know for sure whether a remote rsync service is running.
The second one looks like it can be fixed. I should split this post, though.
@shrey there are some cases where the backup code forgets about older backups, for example if you switch backup providers (even temporarily). We are trying to improve the whole situation in 9.0. For a start, with multiple providers, we can track/clean up more accurately.
BTW, if you paste the filename of the backup into your Eventlog, does it find it there? I will look into making this clearer, i.e. what is getting deleted and why something is not getting deleted.