Backup Improvements: Restic Backend
tobru last edited by girish
My Cloudron server should store its backups on a Raspberry Pi running offsite, in a different location from the server. I've tried to do this with Minio running on the Raspberry Pi, reached via a WireGuard VPN. While this basically works for a small amount of data, it doesn't work with a large amount of data (currently ~1 TB): the Minio server uses all available resources on the Raspberry Pi, and Cloudron aborts after 4 hours, stating that the backup takes too long. I've also tried to increase this timeout by fiddling around in the code, but even after many hours the backup doesn't finish. The connection between the Cloudron server and the backup target is 1 Gbit/s, so bandwidth is definitely not the bottleneck.
I did some experiments with Restic and Minio, but the initial backup hadn't finished after 8 hours of waiting. So I decided to give Restic's rest-server a try; this worked much better and also put far less load on the Raspberry Pi.
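For reference, a minimal sketch of the rest-server setup described above; the repository path, listen port, and hostname are illustrative assumptions:

```shell
# On the Raspberry Pi: serve a repository directory over HTTP
rest-server --path /srv/restic-repo --listen :8000

# On the Cloudron host: initialise the repository and back up against it
restic -r rest:http://raspberrypi.local:8000/ init
restic -r rest:http://raspberrypi.local:8000/ backup /var/backups/cloudron
```

Because rest-server only moves files and leaves chunking, deduplication, and encryption to the restic client, the Raspberry Pi does far less work than it would running Minio.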
Feature request: please integrate Restic as the backup tool in Cloudron. It has a huge user base and supports a lot of backends, so no backend would ever have to be integrated into Cloudron manually again.
Thanks for considering this suggestion.
Hillside502 last edited by
Restic would be really great.
For my non-Cloudron services, I use this Docker image: https://github.com/ViViDboarder/docker-restic-cron.
It wouldn't make sense to take that as is, but it could be a good example of basic Restic functionality for something like Cloudron.
Does anyone here have experience with both Restic and Borg? How do they compare? It seems they are both git-style repositories for data.
necrevistonnezr last edited by necrevistonnezr
I only have experience with restic, which ticks almost all the boxes for me:
- fast incremental backups
- supporting many cloud providers out of the box or via rclone
- emphasis on data integrity and safety
- backups are mountable for easy access
- comprehensive scripting commands and options
- easy updates irrespective of the underlying OS
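As a sketch of the points above, a typical restic workflow; the repository location and mount point are assumptions:

```shell
# One-time repository setup (sftp is one of many supported backends)
restic -r sftp:user@backuphost:/srv/restic-repo init

# Fast incremental backup of a directory
restic -r sftp:user@backuphost:/srv/restic-repo backup /home/me/data

# Mount all snapshots read-only for easy browsing (requires FUSE)
restic -r sftp:user@backuphost:/srv/restic-repo mount /mnt/restic
```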
One caveat: for pruning backups (e.g. down to a 3-monthly, 4-weekly, 5-daily retention strategy), restic has two mechanisms: the `forget` command very quickly deletes the snapshot references to data packs, and the separate `prune` command then removes the data packs that are no longer referenced. This second step is (currently) very slow (e.g. more than 24 hours for my 300 GB of data) for reasons explained here:
When archiving files, restic splits them into smaller "blobs", then bundles these blobs together into "pack files" and uploads the files to the repo. Metadata such as filenames, directory structure etc. is converted to a JSON document and also saved there (bundled together in a pack file). That's what's contained in the repo below the `data/` directory. At the end of the backup run, it uploads an "index" file (stored below `index/`) and a "snapshot" file (in `snapshots/`). The index file contains a list of the pack file names and their contents (which blobs are stored in which file and where). At the start, restic loads all index files and then knows which blobs are already saved.
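The repository structure described above is visible when listing a repository directory; a sketch for a local backend (the path is a placeholder):

```shell
# Typical top-level layout of a restic repository
$ ls /srv/restic-repo
config  data  index  keys  locks  snapshots
```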
When you run `forget`, it just removes the really small file in `snapshots/`, so that operation is fast. In your case, it didn't even remove anything, because there's just one snapshot and you specified `--keep-last 1`. For this reason, the `prune` command wasn't even run although you specified `forget --prune`: restic figured out there was nothing to do because no snapshot was removed.
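To illustrate the behaviour described above, a sketch of a retention run; the repository path and password file are placeholders:

```shell
# Drop snapshot references according to a retention policy.
# With --prune, the pruning step only runs if forget actually removed a snapshot.
restic -r /srv/restic-repo --password-file /etc/restic/pw \
  forget --keep-daily 7 --keep-weekly 4 --keep-monthly 3 --prune
```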
When you run `prune` manually on the other hand, it'll gather the list of all pack files, read the headers to discover what's in each file, then start traversing all snapshots to build a list of all still-referenced blobs, then repack these blobs into new pack files, upload a new index (removing the others) and finally remove the pack files that are now unneeded. When `prune` is run manually, it will run through the whole process. This will also clean up files left over from aborted backups.
There are several steps in the `prune` process that are slow, most notably building the list of referenced blobs, because that'll incrementally load single blobs from the repo, and for a remote repo that'll take a lot of time. The `prune` operation is also the most critical one in the whole project: one error there means data loss, and we're trying hard to prevent that. So there are several safeguards, and the process is not yet well optimized. We'll get to that eventually.
In order to address this and make `prune` much faster (among other things), we've recently added a local cache which keeps all metadata information, the index files, and the snapshot files locally (encrypted of course; they are just simple copies of files that are in the repo anyway). Maybe you can re-try (ideally with a new repo) using the code in the master branch. That'd speed up `prune` a lot.
There are currently several attempts to speed the process up as documented here: https://github.com/restic/restic/issues/2162
I will try one of the beta builds soon and report back.
I have also been running restic for a few years on some servers of mine. The main point for me back then was that I could use an S3 endpoint for the data and that the data would be deduplicated and encrypted at rest. It's also nice that I can directly pipe my MySQL backups to restic.
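Piping a database dump straight into restic, as mentioned above, can look like this; the database name and bucket are placeholders:

```shell
# Back up a mysqldump stream without writing a temporary file;
# --stdin-filename sets the name the dump appears under in the snapshot.
mysqldump --single-transaction mydb \
  | restic -r s3:s3.amazonaws.com/my-bucket backup --stdin --stdin-filename mydb.sql
```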
Pruning the repo can sometimes be a pain, but this is mostly caused (at least on my side) by overlapping backup runs, which leave locks behind that aren't properly cleaned up.
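When overlapping runs leave stale locks behind, as described, they can be cleared manually; the repository path is a placeholder:

```shell
# Inspect current locks, then remove stale ones
# (only safe when no other restic process is using the repository).
restic -r /srv/restic-repo list locks
restic -r /srv/restic-repo unlock
```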
necrevistonnezr last edited by
Just tried pruning my OneDrive backup repo with a newish beta `restic-v0.11.0-246-ge1efc193` from https://beta.restic.net: pruning now took less than 10 minutes(!) compared to around 48 hours(!) before.
What I use for backing up (daily):
```
#!/bin/bash
d=$(date +%Y-%m-%d)
if pidof -o %PPID -x "$0"; then
    echo "$(date "+%d.%m.%Y %T") Exit, already running."
    exit 1
fi
restic -r rclone:onedrive:restic backup /media/Cloudron/snapshot/ -p=resticpw
restic -r rclone:onedrive:restic forget --keep-monthly 6 --keep-weekly 4 --keep-daily 7 -p=resticpw
```
What I use for pruning (once a month):
```
#!/bin/bash
d=$(date +%Y-%m-%d)
if pidof -o %PPID -x "$0"; then
    echo "$(date "+%d.%m.%Y %T") Exit, already running."
    exit 1
fi
restic -r rclone:onedrive:restic prune -p=resticpw
```
Might increase pruning frequency if it proves to be as fast over a longer period...