


Cloudron Forum


Extremely slow backups to Hetzner Storage Box (rsync & tar.gz) – replacing MinIO used on a dedicated Cloudron

Solved · Support
Tags: backups, storagebox
20 Posts · 7 Posters · 371 Views
nebulon (Staff) · #11

We may have to debug this on your server then, since a Hetzner Storage Box via sshfs, rsync and hard links is one of the best-working setups in my experience (I use exactly that on my personal Cloudron with loads of data).

Are you by chance using a Hetzner subaccount? Could the server also be running out of memory?

Either way, maybe send a mail to support@cloudron.io with remote SSH enabled for us; I am sure we can sort this out.

jdaviescoates · #12

      Yeah, I've also been using Hetzner Storage Box via sshfs but with tar.gz for years now with zero issues.

      I use Cloudron with Gandi & Hetzner

(quoting nebulon's reply, #11)
Benoit (translator) · #13

        Thanks a lot for your replies. I'm letting the backups finish and I'm contacting Cloudron support in the meantime.

        @nebulon I don't use subaccounts because I know it's risky. I have subfolders in /home/backups and no prefix.

Do you also have a lot of these lines?

        Jan 07 11:05:38 at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:82:23)
        Jan 07 11:05:38 at ChildProcess.emit (node:events:519:28)
        Jan 07 11:05:38 at maybeClose (node:internal/child_process:1101:16)
        Jan 07 11:05:38 at ChildProcess._handle.onexit (node:internal/child_process:304:5) {
        Jan 07 11:05:38 reason: 'Shell Error',
        Jan 07 11:05:38 details: {},
        Jan 07 11:05:38 stdout: <Buffer >,
        Jan 07 11:05:38 stdoutLineCount: 0,
        Jan 07 11:05:38 stderr: <Buffer 2f 75 73 72 2f 62 69 6e 2f 63 70 3a 20 63 61 6e 6e 6f 74 20 73 74 61 74 20 27 73 6e 61 70 73 68 6f 74 2f 61 70 70 5f 66 63 31 64 33 36 33 38 2d 62 66 ... 54 more bytes>,
        Jan 07 11:05:38 stderrLineCount: 1,
        Jan 07 11:05:38 code: 1,
        Jan 07 11:05:38 signal: null,
        Jan 07 11:05:38 timedOut: false,
        Jan 07 11:05:38 terminated: false
        Jan 07 11:05:38 }
        Jan 07 11:05:38 box:storage/filesystem SSH remote copy failed, trying sshfs copy
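The stderr in the log above is printed as a raw hex Buffer. A minimal sketch of how to read the visible bytes (the trailing "... 54 more bytes" are truncated in the log itself and cannot be recovered here):

```python
# Hex-decode the stderr Buffer bytes exactly as they appear in the log.
# Only the visible bytes are included; the "... 54 more bytes" are
# truncated in the log and omitted here.
hex_dump = (
    "2f 75 73 72 2f 62 69 6e 2f 63 70 3a 20 63 61 6e 6e 6f 74 20 73 74 61 74"
    " 20 27 73 6e 61 70 73 68 6f 74 2f 61 70 70 5f 66 63 31 64 33 36 33 38 2d 62 66"
)
message = bytes.fromhex(hex_dump.replace(" ", "")).decode("utf-8")
print(message)  # /usr/bin/cp: cannot stat 'snapshot/app_fc1d3638-bf
```

So the failing step is `cp` not finding the snapshot path on the remote side; the app id after `app_fc1d3638-bf` is cut off by the log truncation.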
        
nebulon (Staff) · #14

I have never seen that fail unless something is wrong with the SSH connection itself, but we really have to decode the stderr buffer to understand why it fails (the next Cloudron version includes a fix to make it readable). For the curious, https://git.cloudron.io/platform/box/-/commit/b89aa4488cef24b68e23fdcafdf7773d6ae9e762 is that change.

nebulon (Staff) · #15

So, to update everyone here: the root cause is that the backup site was configured to use /prefix as the remote directory instead of setting the prefix. Curiously, it mostly worked, despite my Hetzner Storage Box only allowing /home as the root folder.

Anyway, the fix is to set the subfolder in the backup prefix field and use /home as the remote directory. We have to see how to make this less error-prone for Storage Boxes.

Benoit (translator) · #16

              Many thanks for your support! I’m restarting my backups right away with the correct configuration. I’ll keep you posted on the backup speed, which should be significantly improved now that this configuration issue is fixed.

ikalou · #17

Thanks for this post. I was encountering similar issues using Hetzner Object Storage (a backup of the Cloudron server was taking hours for only 20 GB of data); then I switched to a Hetzner Storage Box with the recommendations @nebulon indicated, and the backup was done in a matter of minutes 😍
For some reason my initial trial failed, and when I reconfigured the backup site settings with a new Storage Box it worked (maybe because this time I enabled External Reachability in Hetzner).

Inli · #18

Hello,
we have the same error on a plain server (not Hetzner):

                  Jan 10 17:02:59 box:shell filesystem: ssh -o "StrictHostKeyChecking no" -i /tmp/identity_file_-mnt-managedbackups-12eef56d-d07e-4d77-9c8b-4a60922f4fad -p 22 backup@server cp -aRl snapshot/app_24dc80d3-196a-4865-8b10-8243ab124d79 2026-01-10-160257-862/app_hedgedoc_v1.21.2
                  Jan 10 17:02:59 box:shell filesystem: ssh -o "StrictHostKeyChecking no" -i /tmp/identity_file_-mnt-managedbackups-12eef56d-d07e-4d77-9c8b-4a60922f4fad -p 22 backup@server cp -aRl snapshot/app_24dc80d3-196a-4865-8b10-8243ab124d79 2026-01-10-160257-862/app_hedgedoc_v1.21.2 errored BoxError: ssh exited with code 255 signal null
                  Jan 10 17:02:59 at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:82:23)
                  Jan 10 17:02:59 at ChildProcess.emit (node:events:519:28)
                  Jan 10 17:02:59 at maybeClose (node:internal/child_process:1101:16)
                  Jan 10 17:02:59 at ChildProcess._handle.onexit (node:internal/child_process:304:5) {
                  Jan 10 17:02:59 reason: 'Shell Error',
                  Jan 10 17:02:59 details: {},
                  Jan 10 17:02:59 stdout: <Buffer >,
                  Jan 10 17:02:59 stdoutLineCount: 0,
                  Jan 10 17:02:59 stderr: <Buffer 70 6f 70 40 62 6f 61 69 72 65 2e 69 6e 6c 69 2e 6f 72 67 3a 20 50 65 72 6d 69 73 73 69 6f 6e 20 64 65 6e 69 65 64 20 28 70 75 62 6c 69 63 6b 65 79 29 ... 3 more bytes>,
                  Jan 10 17:02:59 stderrLineCount: 1,
                  Jan 10 17:02:59 code: 255,
                  Jan 10 17:02:59 signal: null,
                  Jan 10 17:02:59 timedOut: false,
                  Jan 10 17:02:59 terminated: false
                  Jan 10 17:02:59 }
                  Jan 10 17:02:59 box:backuptask copy: copy to 2026-01-10-160257-862/app_hedgedoc_v1.21.2 errored. error: SSH connection error: ssh exited with code 255 signal null
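Hex-decoding the visible stderr bytes of this log the same way (the trailing "... 3 more bytes" are truncated in the log and omitted):

```python
# Hex-decode the stderr Buffer bytes visible in the log; the "... 3 more
# bytes" are truncated in the log and omitted here.
hex_dump = (
    "70 6f 70 40 62 6f 61 69 72 65 2e 69 6e 6c 69 2e 6f 72 67 3a 20 50 65 72"
    " 6d 69 73 73 69 6f 6e 20 64 65 6e 69 65 64 20 28 70 75 62 6c 69 63 6b 65 79 29"
)
message = bytes.fromhex(hex_dump.replace(" ", "")).decode("utf-8")
print(message)  # pop@boaire.inli.org: Permission denied (publickey)
```

So here the exit code 255 comes from SSH itself failing public-key authentication, not from a path problem in the hard-link copy.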
                  

It looks like the hard-link copy is not made with absolute paths, even though the absolute path is clearly specified in the backup configuration.
Can you give us the precise correction procedure?
Thank you,
Gilles

nebulon (Staff) · #19

@inli for the Hetzner Storage Box, the remote directory in the backup site configuration needs to be /home; you can then set a subfolder within the Storage Box via the "prefix" setting in the backup site config form.

Note that this can only be set while adding a new backup site; it cannot be changed afterwards, so you have to re-add the site in your Cloudron.
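As an illustration only (this is a hypothetical sketch, not Cloudron's actual code), the two configurations end up naming the same remote path; the difference is presumably which directory is treated as the mount root, and Storage Boxes only allow /home there:

```python
# Hypothetical sketch of how a remote directory and prefix might combine
# into the final backup path. Names and logic are illustrative only.
def backup_path(remote_dir: str, prefix: str, entry: str) -> str:
    parts = [remote_dir.rstrip("/")]
    if prefix:
        parts.append(prefix.strip("/"))
    parts.append(entry)
    return "/".join(parts)

# Misconfigured: subfolder baked into the remote directory.
print(backup_path("/home/backups", "", "snapshot"))  # /home/backups/snapshot
# Recommended: remote directory /home, subfolder via the prefix setting.
print(backup_path("/home", "backups", "snapshot"))   # /home/backups/snapshot
```

Both calls yield the same path, which would explain why the wrong setup "mostly worked" despite /home being the only allowed root folder.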

girish (Staff) · #20

                      There was a bug in Cloudron when backing up directories where the contents keep changing very quickly. This is fixed in https://git.cloudron.io/platform/box/-/commit/42cefd56eb9f89cfb6c7e35d6410dd5b7dbfa45d .

Other than the above, the slowness is because a cache plugin is in use and its data is being backed up. Since that data changes all the time, there is a lot to clean up and re-upload on each run. A solution is to configure the cache plugin to use a /run/xx path instead of /app/data; this should make things considerably faster.
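The speed difference follows from the hard-link scheme (`cp -aRl`) visible in the logs earlier in this thread: an unchanged file costs one extra hard link per backup, while a changed file (such as cache data) must be stored again. A minimal illustration of the hard-link behavior:

```python
import os
import tempfile

# Unchanged files in a hard-link backup scheme are shared: both directory
# entries point at the same inode, so no extra data is stored. Files that
# change between runs (e.g. cache data) defeat this sharing.
with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "snapshot.dat")
    linked = os.path.join(d, "backup.dat")
    with open(original, "w") as f:
        f.write("unchanged content")
    os.link(original, linked)  # per-file effect of `cp -aRl`
    same_inode = os.stat(original).st_ino == os.stat(linked).st_ino
    print(same_inode)  # True: one copy on disk, two names
```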

girish has marked this topic as solved.