Cloudron Forum


Backup Improvements: Restic Backend

Feature Requests
Tags: backups, feature-request, improvement, restic
24 Posts, 9 Posters, 1.6k Views
  • #1 tobru (last edited by girish)

    My Cloudron server should store its backups on a Raspberry Pi running offsite, in a different location from the server. I've tried to do this with Minio running on the Raspberry Pi, reached via a WireGuard VPN. While this basically works for a small amount of data, it doesn't work with a huge amount (currently ~1 TB): the Minio server uses all available resources on the Raspberry Pi, and Cloudron stops after 4h, stating that the backup takes too long. I've also tried to increase this timeout by fiddling around in the code, but even after many hours the backup doesn't finish. The connection between the Cloudron server and the backup target is 1 Gbit/s, so bandwidth is definitely not the bottleneck.

    I did some experiments with Restic and Minio, but the initial backup didn't finish even after waiting 8h. So I decided to give Restic's rest-server a try; this worked much better and also caused much less load on the Raspberry Pi.
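
    For reference, a minimal sketch of such a setup (the hostname, port, paths and repo name here are placeholders, not my exact configuration):

    # on the Raspberry Pi: serve a repository directory over HTTP
    rest-server --path /srv/restic-data --listen :8000

    # on the Cloudron server: initialize the repo and back up against rest-server
    restic -r rest:http://backup-pi.example:8000/cloudron init
    restic -r rest:http://backup-pi.example:8000/cloudron backup /path/to/data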

    Feature request: please integrate Restic as the backup tool in Cloudron. It has a huge user base and supports a lot of backends, so no backend would have to be integrated into Cloudron manually again.

    Thanks for considering this suggestion.

  • #2 A Former User

    restic · Backups done right!
    https://restic.net/

  • #3 iamthefij (App Dev)

    Restic would be really great.

    For my non-Cloudron services, I use this Docker image: https://github.com/ViViDboarder/docker-restic-cron.

    It wouldn't make sense to take that as is, but it could be a good example of basic Restic functionality for something like Cloudron.

  • #4 girish (Staff)

    Does anyone here have experience with both Restic and Borg? How do they compare? It seems they are both git-style repositories for data.

  • #5 necrevistonnezr, replied to girish (last edited by necrevistonnezr)

    @girish
    I only have experience with restic, which ticks almost all the boxes for me (a short command sketch follows the list):

    • fast incremental backups
    • encryption
    • support for many cloud providers out of the box or via rclone
    • emphasis on data integrity and safety
    • backups are mountable for easy access
    • comprehensive scripting commands and options
    • easy updates irrespective of the underlying OS (restic self-update)
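
    As a quick illustration of those points, a minimal session could look like this (the repository URL and mount point are just examples):

    # create an encrypted repository on any supported backend (here via rclone)
    restic -r rclone:onedrive:restic init
    # fast incremental backup; only new/changed blobs get uploaded
    restic -r rclone:onedrive:restic backup /media/Cloudron/snapshot/
    # mount all snapshots as a browsable filesystem (mount point must exist)
    restic -r rclone:onedrive:restic mount /mnt/restic
    # update the binary itself, independent of the OS package manager
    restic self-update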

    One caveat: for pruning backups (e.g. down to a 3-monthly, 4-weekly, 5-daily retention strategy), restic has two mechanisms (see the command sketch at the end of this post): the forget command very quickly removes the snapshot entries that reference data packs, and the separate prune command then removes those data packs. The prune step is (currently) very slow (e.g. more than 24 hours for my 300 GB of data) for reasons explained here:

    When archiving files, restic splits them into smaller "blobs", then bundles these blobs together into "pack files" and uploads the files to the repo. Metadata such as filename, directory structure etc. is converted to a JSON document and also saved there (bundled together in a pack file). That's what's contained in the repo below the data/ directory. At the end of the backup run, it uploads an "index" file (stored below index/) and a "snapshot" file (in snapshots/). The index file contains a list of the pack file names and their contents (which blobs are stored in which file and where). At the start, restic loads all index files and then knows which blobs are already saved.

    When you run forget it just removes the really small file in snapshots, so that operation is fast. In your case, it didn't even remove anything, because there's just one snapshot and you specified --keep-last 1. For this reason, the prune command wasn't even run although you specified forget --prune; restic figured out there's nothing to do because no snapshot was removed.

    When you run prune manually, on the other hand, it gathers the list of all pack files, reads the headers to discover what's in each file, then traverses all snapshots to build a list of all still-referenced blobs, repacks these blobs into new pack files, uploads a new index (removing the others) and finally removes the pack files that are now unneeded. This also cleans up files left over from aborted backups.

    There are several steps in the prune process that are slow, most notably building the list of referenced blobs, because that'll incrementally load single blobs from the repo, and for a remote repo that'll take a lot of time. The prune operation is also the most critical one in the whole project: One error there means data loss, and we're trying hard to prevent that. So there are several safeguards, and the process is not yet optimized well. We'll get to that eventually.

    In order to address this and make prune much faster (among other things), we've recently added a local cache which keeps all metadata information, the index files, and the snapshot files locally (encrypted of course, just simple copies of files that are in the repo anyway). Maybe you can re-try (ideally with a new repo) using the code in the master branch. That'd speed up prune a lot.

    There are currently several attempts to speed the process up as documented here: https://github.com/restic/restic/issues/2162
    I will try one of the beta builds soon and report back.
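
    To make the two-step mechanism described above concrete, the 3-monthly/4-weekly/5-daily policy I mentioned maps to something like this (repository and password flags omitted for brevity):

    # fast: only deletes the small snapshot files that reference data packs
    restic forget --keep-monthly 3 --keep-weekly 4 --keep-daily 5
    # slow: rebuilds the index and repacks/removes unreferenced data packs
    restic prune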

  • #6 fbartels (App Dev)

    I have also been running restic for a few years now for some of my servers. The main point for me back then was that I could use an S3 endpoint for the data, and that the data would be deduplicated and encrypted at rest. It's also nice that I can pipe my MySQL backups directly into restic.
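
    For example, a dump can be streamed straight into the repository without touching the local disk (the database name and bucket here are placeholders):

    # credentials and repo password come from the environment (omitted here)
    mysqldump --single-transaction mydb | restic -r s3:s3.amazonaws.com/my-bucket backup --stdin --stdin-filename mydb.sql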

    Pruning the repo can sometimes be a pain, but this is mostly caused (at least on my side) by overlapping backup runs and therefore locks not being cleaned up properly.

  • #7 necrevistonnezr

    Just tried pruning my OneDrive backup repo with a newish beta, restic-v0.11.0-246-ge1efc193 from https://beta.restic.net: pruning now took less than 10 minutes(!) compared to around 48 hours(!) before.

    What I use for backing up (daily):

    #!/bin/bash
    d=$(date +%Y-%m-%d)
    if pidof -o %PPID -x "$0"; then
        echo "$(date "+%d.%m.%Y %T") Exit, already running."
        exit 1
    fi
    restic -r rclone:onedrive:restic backup /media/Cloudron/snapshot/ -p=resticpw
    restic -r rclone:onedrive:restic forget --keep-monthly 6 --keep-weekly 4 --keep-daily 7 -p=resticpw
    

    What I use for pruning (once a month):

    #!/bin/bash
    d=$(date +%Y-%m-%d)
    if pidof -o %PPID -x "$0"; then
        echo "$(date "+%d.%m.%Y %T") Exit, already running."
        exit 1
    fi
    restic -r rclone:onedrive:restic prune -p=resticpw
    

    Might increase pruning frequency if it proves to be as fast over a longer period...
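
    For the schedule, hypothetical crontab entries for the two scripts above could look like this (the script paths are placeholders):

    # daily backup at 02:30, monthly prune on the 1st at 04:00
    30 2 * * * /usr/local/bin/restic-backup.sh
    0 4 1 * * /usr/local/bin/restic-prune.sh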

  • #8 imc67 (translator), replied to necrevistonnezr

    @necrevistonnezr @girish it would be great to have Restic as a third backup method option.

    In the forum I often read about "issues" with large backups containing lots of files.

    This week our museum moved all its local files to Nextcloud (on Cloudron), and it's about 120 GB.

    I already reduced the backup frequency from twice a day to once a day, but the complete backup (there are more apps) still takes almost 2 hours with tgz on a CIFS mount to a Hetzner Storage Box (connection speed is about 150-200 Mbit/s).

    As far as I can see, Restic looks perfect for all kinds of backup scenarios?

  • #9 girish (Staff), replied to imc67

    @imc67 does restic back up faster to CIFS with your existing data size?

  • #10 imc67 (translator), replied to girish

    @girish I don't use Restic on Cloudron but it seems @necrevistonnezr does according to his post.

    I do use it for backing up two Zabbix servers to Minio (in Docker on two Synology NASes), and that is extremely simple and fast.

  • #11 necrevistonnezr, replied to girish

    @girish said in Backup Improvements: Restic Backend:

    @imc67 does restic back up faster to CIFS with your existing data size?

    restic needs approx. 30 minutes or less on average to create the daily incremental backup on OneDrive (remember that I use the built-in filesystem backup and let restic create backups from the snapshot folder, which holds around 250 GB of data, of which 150 GB is Nextcloud).

  • #12 imc67 (translator), replied to necrevistonnezr

    @girish @necrevistonnezr said in Backup Improvements: Restic Backend:

    approx. 30 minutes

    That is extremely fast; CIFS + tgz with almost the same amount of GBs takes almost 2 hours.

  • #13 girish (Staff), replied to imc67

    @imc67 CIFS has to be tested on your specific mount, because each CIFS mount is totally different (since it's network based and also depends on remote disk speed etc.). But it's good to have a ballpark number in any case.

  • #14 MooCloud_Matt

    @girish any plans to improve the backup solution on Cloudron, maybe with Restic as the engine?

    Matteo. R.
    Founder and Tech-Support Manager.
    MooCloud MSP
    Swiss Managed Service Provider

  • #15 girish (Staff), replied to MooCloud_Matt

    @MooCloud_Matt we are rewriting the storage backend a bit in https://forum.cloudron.io/topic/6768/what-s-coming-in-7-2. Part of the reason is to make more backends easier to integrate.

  • #16 robi, replied to girish

    @girish to give you a few more ideas, this is how FlyWheel implemented their backups add-on with both restic and rclone.

    https://github.com/getflywheel/local-addon-backups

    I think you'll find it helpful.

    Life of sky tech

  • #17 imc67 (translator)

    Are there any plans to add Restic as an extra backup method? Two of my Cloudrons are meanwhile over 200 GB, and the current methods are not sufficient.

  • #18 girish (Staff)

    Not yet, but I would like to discuss one thing here. Backups are crucial, and loss of data for us quite literally implies loss of business and money. This is why we wrote the backup code ourselves a while ago. It's also why we create our own packages - it's all about data integrity, and loss of data === loss of trust in the product.

    Initially, before we wrote our own backup stuff, I remember we used duplicati, btrfs etc. We faced various issues and there was essentially no help from upstream. Now, restic I am sure is great, but if there is some corruption or issue, our customers will look to us to solve it. So this is a tricky situation for us 🙂 Maybe we can do some restic integration with lots of warnings? The end user also has to know what to do if there is restic corruption or other issues. Keep in mind restic is also not at 1.0 yet. They say: "Once version 1.0.0 is released, we guarantee backward compatibility of all repositories within one major version; ...".
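
    For what it's worth, restic does ship a check command that a user (or an integration) could run to detect repository problems early - a sketch, with the repository flag omitted:

    # verify repository structure and index consistency
    restic check
    # additionally download and verify the actual data packs (slow but thorough)
    restic check --read-data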

    Any suggestions?

  • #19 girish (Staff), replied to imc67

    This also reminded me of @nebulon's suggestion of having more provider-specific tweaking.

    @imc67 are you using a Hetzner Storage Box or similar for backups? What's the plan? For storage backends that have rsync running, we can do a lot better than we do now.

  • #20 imc67 (translator), replied to girish

    @girish yes, indeed I use a Hetzner Storage Box with tgz over CIFS. Is there a better/smarter/quicker way?

