That's normal for ERPs; almost all ERPs require full access to their database.
This is mostly due to the modular design of modern ERPs, which implement a separate database for each module, like microservices, and communicate between modules via APIs or function hooks.
@girish awesome! I think what would be helpful is to test the parameters before pressing 'restore', just to validate that both the connection and decryption keys are OK.
I found the error messages we have right now more confusing than helpful. I don't really know what I did wrong, because to resolve it I just took a fresh backup and restored it using the same details.
@d19dotca yeah, when files move around, the module we use for file listing is not very good at handling things. I suspect it will work fine again tonight. I am reworking the backups code to use a different module, since we have had issues with apps like owncast as well, which move files around a lot.
@privsec you could disable automatic updates and skip backing up when you update manually, though I definitely wouldn't recommend that. Did you create those backups manually? You could try the regular backup view and hit the "Cleanup Backups" button.
The db dumps are atomic (it's a feature of the databases).
What's not atomic is if an app does not make database changes in a transaction, or if it tracks state outside of the database. For example, if it writes a media file to the filesystem and tracks something else in the database. Outside of disk snapshots, there is no way to capture a database and the filesystem at the same moment in time. A disk snapshot of a database is not very portable, and in most cases on cloud servers, disk snapshots of specific paths are not an option.
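To illustrate that second case, here is a minimal Python sketch of a hypothetical app (the schema and filenames are made up) that writes a media file to disk and tracks it in a database. A backup taken between the two steps captures a file on disk with no matching database row:

```python
import os
import sqlite3
import tempfile

# Hypothetical app state: a data directory plus a SQLite database.
workdir = tempfile.mkdtemp()
db = sqlite3.connect(os.path.join(workdir, "app.db"))
db.execute("CREATE TABLE media (name TEXT)")

# Step 1: the app writes the media file to the filesystem.
with open(os.path.join(workdir, "photo.jpg"), "wb") as f:
    f.write(b"...image bytes...")

# A backup taken at this instant sees the file but no database row.
files_on_disk = set(os.listdir(workdir)) - {"app.db"}
rows_in_db = {name for (name,) in db.execute("SELECT name FROM media")}
print(files_on_disk - rows_in_db)  # → {'photo.jpg'}

# Step 2: only now does the app record the file in the database.
db.execute("INSERT INTO media (name) VALUES ('photo.jpg')")
db.commit()
```

Each individual database dump is still consistent; it is the combination of filesystem and database that can drift apart in the window between the two writes.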
I used to worry about this a lot back in the day 🙂 But hundreds of thousands of installations later, I can tell you that I have never hit this problem (the atomicity aspect).
@CBCUN any decent hoster can upload at a speed of 10 MB/s, so a 600 GB backup should take 16-18 hours. My Cloudron backup is a ~80 GB TGZ and it takes 35-45 minutes to back up from a Netcup RS to a Hetzner storage box. Either your source/target system is slow as hell or something else is bottlenecking. What method do you use? I found TGZ to be way faster because of the many, many small files. If you have larger files, maybe rsync is the way to go. A good solution would be a (maybe self-hosted) S3 storage together with rsync. I use Minio for one of my backup instances, and the backup from Cloudron is blazing fast as most of the work is done by the target machine.
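For reference, the back-of-the-envelope math behind that estimate (a small Python sketch, assuming a sustained 10 MB/s and decimal units):

```python
# Rough transfer-time estimate for a large backup upload.
size_gb = 600       # backup size in GB (decimal: 1 GB = 1000 MB)
speed_mb_s = 10     # sustained upload speed in MB/s

seconds = size_gb * 1000 / speed_mb_s
hours = seconds / 3600
print(round(hours, 1))  # → 16.7
```

Real-world runs land a bit above this because of compression, many small files, and protocol overhead, which is roughly where the 16-18 hour range comes from.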
@perler Wow, that is a very good catch!
The kind of "huh, why have I not thought about this one before, yet it seems obvious"... As you say, the idea comes from an immediate need when facing a specific situation that requires a solution no one else seems to have needed before.
And while reading how you describe it, it pops into mind like: "yeah, it would be a masterpiece to be able to restore only the files in an app like Nextcloud". So true.
There could be other apps as well, but at first thought NC is the very first candidate for such a desirable feature, because of the very nature of the app, which holds files and potentially team communications and email boxes too. And this is where we can see that restoring the whole pack at once, especially for a very active NC app, could produce unexpected outcomes, depending of course on how the app is used inside an organization.
I don't think it would be a required feature for all apps. But then, if it's easier to implement as a general backup feature, why not. I would vote for this.
There is a bug in the current release where the code crashes when trying to send a notification after a backup failure. This is fixed. I think in the coming releases we can explore more notification options, but at least now you should get an email.
Thanks, I deleted files in the Nextcloud recycle bin. Now, after a reboot, Cloudron System Info shows 4.3 GB for the Nextcloud instance. (I had to delete the nginx config of a stopped app to get the system to reboot.)
Thanks, I'll keep both folders. I have no particular goal to remove Discourse or its backups.
I measured the /var/backups size by adding up the three separate folders that FileZilla showed me, but I now understand that some files are hardlinks. And the Backup data shows only 4.87 GB now, so my disk is big enough for me to move on. Thank you!
@scooke I went to each individual app and pointed to the fileshare with the full path "/media/fileshare/2020-07-01-0500/app_backup.tar.gz.enc" and restored those backups.
As for the second part, Azure adds an "actimeo=30" option to the cifs share entry in your /etc/fstab file, which causes the connection to time out if the backup takes too long. Once that is removed, reboot the server and new backups will complete without issues.
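For anyone looking for the option, an illustrative /etc/fstab entry is below. The storage account name, share name, and credentials path are placeholders, not the actual values Azure generates; the fix described above is dropping `actimeo=30` from the options list:

```
# Before (as generated; hostname and paths are placeholders):
//mystorageacct.file.core.windows.net/fileshare /media/fileshare cifs vers=3.0,credentials=/etc/smbcredentials/mystorageacct.cred,serverino,actimeo=30 0 0

# After (actimeo=30 removed):
//mystorageacct.file.core.windows.net/fileshare /media/fileshare cifs vers=3.0,credentials=/etc/smbcredentials/mystorageacct.cred,serverino 0 0
```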
The doc actually says "The Backup Cleaner checks if entries in the database exist in the storage backend and removes stale entries from the database automatically." I will fix the progress message to match this.