Improve Clone/Backup/Restore Speed
-
Looking at the progress of a clone from a backup on a remote S3 share, it seems to restore individual files, one by one, which takes hours for larger apps.
Can the operation be restructured to quickly pull the full backup locally and then copy the files to the container much faster?
-
@robi This likely has to do with the remote nature of your backups. Using a mounted disk allows cloning to be super quick. I know there was room for improvement before and changes were later made to allow for faster backups, but I'm not certain whether that impacts restores too.
-
@girish currently it's set to rsync and default concurrency of 10.
Increasing concurrency isn't as useful when all the files are fetched one by one. It's too many requests.
Object storage should NOT be treated like a filesystem because it isn't one.
The large object should be requested in its entirety.
This is something I worked on at IBM as few people on the planet understood this concept.
As for a clone, it would be much faster now to install a new copy of the same app, and apply the few deltas. But that's not what's happening.
How can we speed things up?
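To illustrate the difference being described here (bucket and app names below are hypothetical), an rsync-style layout issues one GET request per file, while a single-archive layout pulls everything in one streamed request:

```shell
# Hypothetical bucket/app names, for illustration only.

# rsync-style layout: thousands of small objects, one GET request each.
aws s3 sync s3://my-backups/app-1234/ /tmp/restore/

# Single-archive layout: one large object, fetched in a single streamed
# request and unpacked locally before copying into the container.
aws s3 cp s3://my-backups/app-1234.tar.gz - | tar -xz -C /tmp/restore/
```

The second form trades away incremental uploads for much faster whole-app restores, which is the tension discussed in the rest of this thread.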
-
@robi said in Improve Clone Speed:
The large object should be requested in its entirety.
Mmm, I am not following. What do you mean by large object? If the files are stored separately as rsync does, they have to be fetched separately as well.
As for a clone, it would be much faster now to install a new copy of the same app, and apply the few deltas. But that's not what's happening.
That's honestly very complex (might even be impossible) since you have to delta databases, caches etc.
-
@girish said in Improve Clone Speed:
Mmm, I am not following. What do you mean by large object? If the files are stored separately as rsync does, they have to be fetched separately as well.
Then I am definitely using the wrong method.
As for a clone, it would be much faster now to install a new copy of the same app, and apply the few deltas. But that's not what's happening.
That's honestly very complex (might even be impossible) since you have to delta databases, caches etc.
For a clone, it would be simpler to duplicate the database/caches, etc over the default install and not compare anything.
-
@robi said in Improve Clone Speed:
For a clone, it would be simpler to duplicate the database/caches, etc over the default install and not compare anything.
Yes, good point. And that's probably the solution to making things faster. Currently, our clone system is tied to a backup. So, this feature request is really "clone a live app" (i.e. skip the backup step altogether).
-
@girish are you aware of the incremental feature of tar?
This feature is provided by tar via the argument --listed-incremental=snapshot-file, where a "snapshot-file" is a special file maintained by the tar command to determine which files have been added, modified, or deleted. That will speed things up.
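A minimal sketch of how that works in practice (paths here are illustrative): a level 0 archive records file state in the snapshot file, and a later run against a copy of that snapshot captures only what changed.

```shell
mkdir -p /tmp/demo-src /tmp/demo-bkp
echo "one" > /tmp/demo-src/a.txt
echo "two" > /tmp/demo-src/b.txt

# Level 0 (full) backup; tar records file metadata in the snapshot file.
tar --create --file=/tmp/demo-bkp/full.tar \
    --listed-incremental=/tmp/demo-bkp/snapshot.file \
    -C /tmp/demo-src .

# Modify one file, then take a level 1 (incremental) backup against a copy
# of the snapshot, so the level 0 snapshot stays reusable.
echo "three" > /tmp/demo-src/b.txt
cp /tmp/demo-bkp/snapshot.file /tmp/demo-bkp/snapshot.1
tar --create --file=/tmp/demo-bkp/incr.tar \
    --listed-incremental=/tmp/demo-bkp/snapshot.1 \
    -C /tmp/demo-src .

# The incremental archive contains only directories and the changed file.
tar --list --file=/tmp/demo-bkp/incr.tar
```

This requires GNU tar. Restoring means extracting the level 0 archive first, then each incremental in order with --listed-incremental=/dev/null.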
-
More info here:
https://www.gnu.org/software/tar/manual/html_node/Incremental-Dumps.html
And here:
https://serverfault.com/a/569668
But perhaps what we really want is Borg?
https://borgbackup.readthedocs.io/en/stable/
It's recommended in that Server Fault post, but also when I asked Hetzner about backups I can download (you can't download backups from their native backup solution) to store off-Hetzner, they suggested I use it; here is their guide:
https://community.hetzner.com/tutorials/install-and-configure-borgbackup
But that's ^ all a bit too technical for me. I use Cloudron so I don't have to bother with all that. So I'd love to just have Borg as a backup option on Cloudron
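For reference, a basic Borg workflow is only a few commands (repo path and passphrase below are made up, and borgbackup must be installed); it deduplicates data chunks, so repeated runs are effectively incremental:

```shell
export BORG_PASSPHRASE='example-only'   # made-up passphrase, for illustration

# One-time: create an encrypted repository (local path or ssh:// remote).
borg init --encryption=repokey /mnt/backup/borg-repo

# Each run: archive the app data; unchanged chunks are deduplicated.
borg create --stats /mnt/backup/borg-repo::app-{now} /home/user/app-data

# Thin out old archives on a retention schedule.
borg prune --keep-daily=7 --keep-weekly=4 /mnt/backup/borg-repo
```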
-
@Lonk said in Improve Clone/Backup/Restore Speed:
Incremental backups. I thought we had those already, but if we don't - we should vote to support them.
We do have them with the rsync option.
-
Actually, rsync-ing and rclone-ing the incremental backups - encrypted - to OneDrive has been very reliable for me over the last few years. rclone is such a fantastic tool.
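For anyone wanting to replicate a setup like this, a sketch (the remote names are whatever you chose during rclone config; this assumes a "crypt" remote layered over a OneDrive remote):

```shell
# Assumes `rclone config` was already used to create:
#   onedrive:        a OneDrive remote
#   onedrive-crypt:  a crypt remote wrapping onedrive:
# Only changed files are re-uploaded, and everything is encrypted client-side.
rclone sync /var/backups/cloudron onedrive-crypt:cloudron-backups --transfers 8
```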
-
@jdaviescoates I'm using .tgz with Backblaze right now; since tgz was the default, I didn't look into it. Should I switch to rsync for the benefit of incremental changes, or are there cons like the ones @robi is trying to solve (his cloning / restoring speed suggestions)? Is "incremental tar files" the best of both worlds, basically?
-
@Lonk If you use rsync, use Wasabi as it has no ingress costs. Also, in Backblaze, check the lifecycle settings on all buckets to make sure you're not paying to store infinite versions of versions; change the setting for each bucket to keep only the latest.
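For the CLI-inclined, the "keep only the last version" behaviour can also be applied as a lifecycle rule (the bucket name is hypothetical, and this uses the older b2 command-line syntax, so check against your CLI version):

```shell
# Hide old versions on overwrite, delete hidden versions after 1 day.
b2 update-bucket --lifecycleRules '[{
  "fileNamePrefix": "",
  "daysFromUploadingToHiding": null,
  "daysFromHidingToDeleting": 1
}]' my-backup-bucket allPrivate
```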
-
@marcusquinn said in Improve Clone/Backup/Restore Speed:
@Lonk If you use rsync, use Wasabi as it has no ingress costs. Also, in Backblaze, check the lifecycle settings on all buckets to make sure you're not paying to store infinite versions of versions; change the setting for each bucket to keep only the latest.
I did have infinite versions on, thanks for saving me there, I owe ya!
-
@Lonk Everyone does, as the sneaky f***ers make it the default. I must have lost thousands of dollars before I found that setting, which everyone else missed.
-
Wasabi don't make it the default though, and have a much better interface. I'm dropping Backblaze from my recommendations for S3 needs and only use it for personal machine backups, which don't have all those extra costs.
-
@robi said in Improve Clone/Backup/Restore Speed:
This feature is provided by tar via the argument --listed-incremental=snapshot-file, where a "snapshot-file" is a special file maintained by the tar command to determine which files have been added, modified, or deleted.