Backup Improvements: Restic Backend
-
My Cloudron server should store its backups on a Raspberry Pi running offsite, in a different place than the server. I've tried to do this with Minio running on the Raspberry Pi, reached via a WireGuard VPN. While this basically works for a small amount of data, it doesn't work with a large amount of data (currently ~1 TB). The Minio server uses all available resources on the Raspberry Pi and Cloudron aborts after 4 h, stating that the backup takes too long. I've also tried to increase this timeout by fiddling around in the code, but even after many hours the backup doesn't finish. The connection between the Cloudron server and the backup target is 1 Gbit/s, so bandwidth is definitely not the bottleneck.
I did some experiments with Restic and Minio, but the initial backup didn't finish after 8 h of waiting. So I decided to give Restic's rest-server a try; this worked much better and also caused much less load on the Raspberry Pi.
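For reference, a minimal sketch of the rest-server setup that ended up working; the hostnames, paths and password file below are placeholders, not my exact values:

```
# On the Raspberry Pi: serve a restic REST repository from a local directory
# (rest-server is a separate binary, see https://github.com/restic/rest-server)
rest-server --path /srv/restic-repo --listen :8000

# On the Cloudron server: initialize the repository once, then back up to it
restic -r rest:http://backup-pi:8000/ init -p /root/restic-password
restic -r rest:http://backup-pi:8000/ backup /var/backups/cloudron -p /root/restic-password
```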
Feature request: Please integrate Restic as the backup tool in Cloudron. It has a huge user base and supports a lot of backends, so no backend would have to be integrated into Cloudron manually anymore.
Thanks for considering this suggestion.
-
restic · Backups done right!
https://restic.net/
-
Restic would be really great.
For my non-Cloudron services, I use this Docker image: https://github.com/ViViDboarder/docker-restic-cron.
It wouldn't make sense to take that as is, but it could be a good example of basic Restic functionality for something like Cloudron.
-
Does anyone here have experience with Restic and Borg? How do they compare? It seems they are both git-style repositories for data.
-
@girish
I only have experience with restic, which ticks almost all boxes for me (a short sketch of a couple of these commands follows after the list):
- fast incremental backups
- encryption
- supports many cloud providers out of the box or via rclone
- emphasis on data integrity and safety
- backups are mountable for easy access
- comprehensive scripting commands and options
- easy updates irrespective of the underlying OS (`restic self-update`)
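For illustration, a minimal sketch of the mount and self-update points above; the repository URL, mount point and password file are placeholders rather than my real setup:

```
# Browse existing snapshots read-only via FUSE, e.g. to restore single files
mkdir -p /mnt/restic
restic -r rest:http://backup-pi:8000/ mount /mnt/restic -p /root/restic-password

# Update the restic binary itself, independent of the OS package manager
restic self-update
```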
One caveat: For pruning backups (e.g. down to a 3-monthly, 4-weekly, 5-daily backup strategy), restic has two mechanisms: the `forget` command very quickly deletes the database entries pointing to data packs, and the separate `prune` command then removes those data packs. This process is (currently) very slow (e.g. more than 24 hours for my 300 GB of data) for reasons explained here:

When archiving files, restic splits them into smaller "blobs", then bundles these blobs together into "pack files" and uploads the files to the repo. Metadata such as filenames, directory structure etc. is converted to a JSON document and also saved there (bundled together in a pack file). That's what's contained in the repo below the `data/` directory. At the end of the backup run, it uploads an "index" file (stored below `index/`) and a "snapshot" file (in `snapshots/`). The index file contains a list of the pack file names and their contents (which blobs are stored in which file and where). At the start, restic loads all index files and then knows which blobs are already saved.

When you run `forget`, it just removes the really small file in `snapshots/`, so that operation is fast. In your case, it didn't even remove anything, because there's just one snapshot and you specified `--keep-last 1`. For this reason, the `prune` command wasn't even run although you specified `forget --prune`; restic figured out there was nothing to do because no snapshot was removed.

When you run `prune` manually, on the other hand, it'll gather the list of all pack files, read the headers to discover what's in each file, then start traversing all snapshots to build a list of all still-referenced blobs, then repack these blobs into new pack files, upload a new index (removing the others) and finally remove the pack files that are now unneeded. When `prune` is run manually, it will run through the whole process. This will also clean up files left over from aborted backups.

There are several steps in the `prune` process that are slow, most notably building the list of referenced blobs, because that'll incrementally load single blobs from the repo, and for a remote repo that'll take a lot of time. The `prune` operation is also the most critical one in the whole project: one error there means data loss, and we're trying hard to prevent that. So there are several safeguards, and the process is not yet optimized well. We'll get to that eventually.

In order to address this and make `prune` much faster (among other things), we've recently added a local cache which keeps all metadata information, the index files, and the snapshot files locally (encrypted of course, just simple copies of files that are in the repo anyway). Maybe you can re-try (ideally with a new repo) using the code in the master branch. That'd speed up prune a lot.

There are currently several attempts to speed the process up, as documented here: https://github.com/restic/restic/issues/2162
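For reference, the two-step cleanup described above looks roughly like this; the repository URL and password file are made-up placeholders:

```
# Fast: only removes the small snapshot files according to the retention policy
restic -r rest:http://backup-pi:8000/ forget --keep-daily 5 --keep-weekly 4 --keep-monthly 3 -p /root/restic-password

# Slow: walks all snapshots, repacks still-referenced blobs and deletes unreferenced pack files
restic -r rest:http://backup-pi:8000/ prune -p /root/restic-password
```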
I will try one of the beta builds soon and report back.
-
I have also had restic running for a few years now for some servers of mine. The main point back then for me was that I could use an S3 endpoint for the data and that the data would be deduplicated and encrypted at rest. It's also nice that I can directly pipe my MySQL backups to restic.
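Piping a dump straight into a repository looks roughly like this; the database name, bucket and endpoint are placeholders, not my actual setup:

```
# Stream a MySQL dump into restic without writing a temporary file to disk
# (S3 credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY,
#  the repository password from RESTIC_PASSWORD or a password file)
mysqldump --single-transaction mydatabase \
  | restic -r s3:https://s3.example.com/backup-bucket backup --stdin --stdin-filename mydatabase.sql
```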
Pruning the repo can sometimes be a pain, but this is mostly caused (at least on my side) by overlapping backup runs and the resulting stale locks not being cleaned up properly.
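When that happens, something along these lines usually gets me going again; only run it when no other restic process is still using the repository (the repository URL is a placeholder):

```
# Show current locks, then remove stale ones left behind by an aborted run
restic -r s3:https://s3.example.com/backup-bucket list locks
restic -r s3:https://s3.example.com/backup-bucket unlock
```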
-
Just tried pruning my OneDrive backup repo with a newish beta (`restic-v0.11.0-246-ge1efc193` from https://beta.restic.net): pruning now took less than 10 minutes(!) compared to around 48 hours(!) before.

What I use for backing up (daily):
#!/bin/bash
d=$(date +%Y-%m-%d)
if pidof -o %PPID -x "$0"; then
  echo "$(date "+%d.%m.%Y %T") Exit, already running."
  exit 1
fi
restic -r rclone:onedrive:restic backup /media/Cloudron/snapshot/ -p=resticpw
restic -r rclone:onedrive:restic forget --keep-monthly 6 --keep-weekly 4 --keep-daily 7 -p=resticpw
What I use for pruning (once a month):
#!/bin/bash
d=$(date +%Y-%m-%d)
if pidof -o %PPID -x "$0"; then
  echo "$(date "+%d.%m.%Y %T") Exit, already running."
  exit 1
fi
restic -r rclone:onedrive:restic prune -p=resticpw
Might increase pruning frequency if it proves to be as fast over a longer period...
-
@necrevistonnezr @girish it would be great to have Restic as a third option for backup method.
In the forum I often read about issues with large backups containing lots of files.
Since this week, our museum has moved all its local files to Nextcloud (in Cloudron), and that alone is about 120 GB.
I already reduced the backup frequency from twice a day to once a day, but the complete backup (there are more apps) still takes almost 2 hours with tgz on a CIFS mount to a Hetzner StorageBox (connection speed is about 150-200 Mbit/s).
As far as I can see, Restic looks perfect for all kinds of backup scenarios?
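If Restic were integrated, my understanding is that a StorageBox could then be addressed directly over SFTP instead of a CIFS mount; a hypothetical sketch with placeholder username, host and paths (untested on Cloudron):

```
# Initialize and back up to a Hetzner StorageBox via restic's SFTP backend
restic -r sftp:u123456@u123456.your-storagebox.de:/restic init -p /root/restic-password
restic -r sftp:u123456@u123456.your-storagebox.de:/restic backup /var/backups/cloudron -p /root/restic-password
```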
-
@girish I don't use Restic on Cloudron but it seems @necrevistonnezr does according to his post.
I do use it for backing up two Zabbix servers to Minio (in Docker on two Synology NASes) and that is extremely simple and fast.
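For anyone curious, pointing restic at a Minio instance just uses its S3 backend; a rough sketch with made-up endpoint, bucket and credentials:

```
# Credentials of the Minio user, read by restic's S3 backend
export AWS_ACCESS_KEY_ID=backup-user
export AWS_SECRET_ACCESS_KEY=backup-secret

restic -r s3:http://nas.local:9000/restic-zabbix init -p /root/restic-password
restic -r s3:http://nas.local:9000/restic-zabbix backup /etc/zabbix -p /root/restic-password
```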
-
@girish said in Backup Improvements: Restic Backend:
@imc67 does restic backup faster to CIFS with your existing data size?
restic needs less than approx. 30 minutes on average to create the daily incremental backup on OneDrive (remember that I use the built-in filesystem backup and let restic create backups from the snapshot folder, which holds around 250 GB of data, of which about 150 GB is Nextcloud)
-
@girish @necrevistonnezr said in Backup Improvements: Restic Backend:
approx. 30 minutes
that is extremely fast; CIFS + tgz with almost the same amount of GB takes almost 2 hours
-
@girish any plans to improve the backup solution on Cloudron, maybe with Restic as the engine?
-
@MooCloud_Matt we are rewriting the storage backend a bit in https://forum.cloudron.io/topic/6768/what-s-coming-in-7-2 . Part of the reason is to make more backends easier to integrate.
-
Are there any plans to add Restic as an extra backup method? Two of my Cloudrons are now over 200 GB and the current methods are not sufficient.
-
Not yet, but I would like to discuss one thing here. Backups are crucial, and loss of data for us quite literally implies loss of business and money. This is why we wrote the backup code ourselves a while ago. It is also why we create our own packages: it's all about data integrity, and loss of data === loss of trust in the product.
Initially, before we wrote our own backup stuff, I remember we used duplicati, btrfs etc. We faced various issues and there was essentially no help from upstream. Now, restic I am sure is great, but if there is some corruption or issue, our customers will look to us to solve it. So this is a tricky situation for us. Maybe we can do some restic integration with lots of warnings? The end user also has to know what to do if there is restic corruption or other issues. Keep in mind that restic is also not 1.0 yet. They say "Once version 1.0.0 is released, we guarantee backward compatibility of all repositories within one major version; ...".
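From what I understand, restic does at least ship its own consistency checks that we (or the end user) could run regularly; a minimal sketch, with the repository URL and password file as placeholders:

```
# Verify repository structure and index consistency
restic -r rest:http://backup-host:8000/ check -p /root/restic-password

# Additionally download and verify the actual pack data (slow, reads everything)
restic -r rest:http://backup-host:8000/ check --read-data -p /root/restic-password
```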
Any suggestions?
-