@girish Thanks for looking at this. Just triggered another backup with all apps active and it succeeded. I guess it's finally time to move to the new hardware (plus making sure that the new system is using the C locale by default).
@girish haha, yeah that's fair. Deleting backups is probably one of those tasks that should intentionally be difficult to do to avoid any issues (i.e. they need to delete the backups manually).
The only thing I can suggest is a little icon beside each backup that is past the retention policy (or part of a version-upgrade backup), which explains it quickly and links to the docs, or maybe just one overall info button in the corner of the backups page. Not sure which is most feasible. I'll file a formal feature request for that in a moment though.
@girish no, they're pretty useless. Their web UI S3 console is such crap it can't handle the chatty API requests and keeps timing out. Also, I may be wrong that the multiple directories are due to failures and restarts; it just looks like each changed app gets a new dir per day.
So I am attempting other workarounds. Like creating a new bucket and just nuking the old one.
rsync isn't great for object store backups as it makes a ton of small files.
tgz isn't great as it's a lot of repeated information.
We need something hybrid that is the best of both.
Something like backing up to a local Minio much more quickly, then doing an object-store-to-object-store transfer offsite, which is much more efficient. This may also offer an opportunity to dedupe and optimize further.
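To make the idea concrete, here is a rough sketch of that two-stage flow using the MinIO client (`mc`). The alias names, endpoints, and bucket names are all placeholders I made up; this is just an illustration of the shape, not a tested Cloudron workflow:

```sh
# Stage 1: fast local backup into a MinIO instance on the LAN
# (alias "local" and bucket "backups" are hypothetical names)
mc alias set local http://127.0.0.1:9000 ACCESS_KEY SECRET_KEY
# ... backup tooling writes into local/backups here ...

# Stage 2: object-to-object transfer offsite, server-side where supported
mc alias set offsite https://s3.example.com ACCESS_KEY SECRET_KEY
mc mirror local/backups offsite/backups
```

The win is that the slow, chatty small-file phase happens against a local endpoint, while the WAN hop is a bulk mirror between two object stores.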
The first level is the per-app level. Right now, apps are already backed up one by one, but they are neither stored nor reported individually. And this is the missing feature in my opinion.
Ah no, they are listed in app -> backups. So, even if you do a full backup, each individual app backup will be listed in app -> backups.
Yes, they are listed at the app level, but there's no reporting at the app level because the backup succeeds or fails at the box level.
You can also use that backup to restore/clone the app.
Yes, but only if the backup succeeds for all apps.
So, again, I think the current issue is that everything is treated as a whole, while it makes more sense, in my opinion, to treat each app individually and then, at the end, (optionally?) bundle the individual parts into a whole.
Yes, so they are treated individually; it's actually very close to what you want. The only issue is that when a full backup fails, the successful individual app backups that are part of that failed full backup get removed/cleaned up every night.
Exactly, they are per app but not treated like that in the end, because success or failure is determined for the whole box.
I have made a task to fix the behavior, let's see.
@p44 - Do you find the 2 PM (presumably that's a "peak" time for your server traffic) impacts performance at all?
This is a very good question. Performance could be affected, but our customers are at lunch at that time, so we prefer to have an updated backup copy right after the "morning work". Backing up all the data takes around 30 minutes.
@nebulon Ah okay, I think I understand. Does that mean we don't really need enough memory for 3x<part size> so much as 3x(<upload_part_size>x<upload_concurrency>) for rsync setups, or did I totally misunderstand?
If a file is big (i.e. larger than the upload part size), we still need 3x<part size> for rsync. So, let's say you have a 5GB file; then we still need 3GB of RAM (since the upload part size in your screenshot is 1GB). In addition, we need memory for uploading 200 files in parallel as well. And if those 200 files contain more 5GB files, then you need to add it all up. It gets complicated very quickly!
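A crude worst-case estimate following the rule of thumb above can be sketched like this. The function name and the simplifying assumptions (every large file costs a flat 3x the part size, and every remaining concurrency slot holds one full part-size buffer) are mine, not Cloudron's actual accounting:

```python
def backup_memory_estimate_gb(part_size_gb, concurrency, file_sizes_gb):
    """Rough worst-case RAM estimate for rsync-style backups.

    Simplified model (an assumption, not Cloudron's real bookkeeping):
    - each file larger than the part size costs ~3x the part size
      while its multipart upload is in flight;
    - each remaining concurrency slot buffers up to one part for a
      smaller file.
    """
    big_files = [s for s in file_sizes_gb if s > part_size_gb]
    big = 3 * part_size_gb * len(big_files)          # 3x part size per large file
    small_slots = max(0, concurrency - len(big_files))
    small = small_slots * part_size_gb               # one part buffer per slot
    return big + small

# The example from this thread: 1GB part size, 200 parallel uploads,
# one 5GB file in flight -> 3GB for the big file + 199GB of part buffers.
print(backup_memory_estimate_gb(1, 200, [5]))  # 202
```

Even this simplified model shows why the numbers explode: the concurrency term dominates long before file sizes do.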
@girish Thanks! This ended up being a combination of issues, I believe: throttling on the backups due to using rsync, encryption, and file names being too long (only a few). So I've changed some settings around and it seems to have been resolved. I appreciate the help from both @robi and yourself.
@atrilahiji also, just having a per-app setting would make it possible to use e.g. rsync for Nextcloud and tar for other apps, which I'd really like to be able to do (and has been previously requested and was even included in a "what's coming in Cloudron X" post, but then never actually made it in).