Cloudron Forum

Extremely slow backups to Hetzner Storage Box (rsync & tar.gz) – replacing MinIO used on a dedicated Cloudron

Unsolved · Support · Tags: backups, storagebox
17 Posts · 5 Posters · 114 Views
#1 · Benoit (translator), last edited by joseph

    Hi,

    I’m experiencing extremely slow Cloudron backups to a Hetzner Storage Box, regardless of the backup format (rsync or tar.gz, encryption disabled).

    Context

    • Backup target: Hetzner Storage Box (SSH/rsync)
    • Tested formats: rsync and tar.gz
    • Encryption: disabled
    • Nextcloud apps excluded
    • Goal: move away from MinIO, which I previously ran on a dedicated Cloudron, and use the Storage Box instead

    Observed behavior
    The upload phase completes, but backups then stall for a long time during the rotation / copy phase. I consistently see SSH copy errors followed by a fallback to sshfs:

    box:backuptask uploadAppSnapshot: <APP_DOMAIN> uploaded to snapshot/app_<APP_ID>
    box:backuptask backupAppWithTag: rotating snapshot to timestamped backup
    
    box:shell filesystem: ssh ... cp -aR snapshot/app_<APP_ID> <BACKUP_PATH>
    /usr/bin/cp: cannot stat 'snapshot/app_<APP_ID>'
    
    box:storage/filesystem SSH remote copy failed, trying sshfs copy
    box:shell filesystem: cp -aR /mnt/managedbackups/<CLOUDRON_ID>/snapshot/app_<APP_ID> \
    /mnt/managedbackups/<CLOUDRON_ID>/<TIMESTAMP>/app_<APP_NAME>_v<APP_VERSION>
    

    After this, backups often stay at the same percentage for 20–30+ minutes, sometimes much longer, even though CPU and RAM usage are low.

    System

    • Cloudron: 9.0.13
    • OS: Ubuntu 24.04.1 LTS (kernel 6.8)
    • VM: 4 vCPU, ~21 GB RAM
    • Hypervisor: QEMU

    Questions

    • Is Hetzner Storage Box a recommended/supported backend for Cloudron backups?
    • Is this SSH cp -aR failure and sshfs fallback expected?
    • Could this explain the very long backup times?
    • Are there recommended settings (hardlinks, concurrency, format) when using a Storage Box?

    Thanks for any guidance.

#2 · jdaviescoates, last edited by jdaviescoates

      @Benoit said in Extremely slow backups to Hetzner Storage Box (rsync & tar.gz) – replacing MinIO used on a dedicated Cloudron:

      Could this explain the very long backup times?

      How long for how much data?

      FYI I use tar.gz via SSHFS to a Hetzner Storage Box and my most recent backups took about 2.5 hours for about 400 GB of data.

      I think hardlinks are always recommended?

      I use Cloudron with Gandi & Hetzner

#3 · Benoit (translator)

        @jdaviescoates I started the backup 2.5 hours ago and it is currently at 32% for a little over 200 GB to back up using rsync (first full backup).
        I did not enable hardlinks.
        Since this Storage Box will be used to back up multiple Cloudron instances, I left the setting at 10 simultaneous uploads.

#4 · jdaviescoates

           I think the first rsync generally takes ages and then it's faster afterwards. I've not tried it with my Cloudron, but once I update to Cloudron 9 I'm planning to do tar.gz to one location and rsync to another.


#5 · Benoit (translator)

            @jdaviescoates No progress since my last message, still stuck at 32%, and this is what I see in the logs:

            Jan 06 15:12:30 box:syncer finalize: patching in integrity information into /home/yellowtent/platformdata/backup/03513981-96d3-4130-832d-ff15d8a124fc/5ed6495a-866e-4899-9d1d-437bafe6473a.sync.cache
            Jan 06 15:12:30 box:backuptask upload: path snapshot/app_5ed6495a-866e-4899-9d1d-437bafe6473a site 03513981-96d3-4130-832d-ff15d8a124fc uploaded: {"transferred":117746128,"size":117746128,"fileCount":8678}
            Jan 06 15:12:30 box:tasks updating task 21170 with: {"percent":31.76923076923078,"message":"Uploading integrity information to snapshot/app_5ed6495a-866e-4899-9d1d-437bafe6473a.backupinfo (matomo.monapp.fr)"}
            Jan 06 15:12:30 box:backupupload upload completed. error: null
            Jan 06 15:12:30 box:backuptask runBackupUpload: result - {"result":{"stats":{"transferred":117746128,"size":117746128,"fileCount":8678},"integrity":{"signature":"a7be7fd4df5bf668148d531a015dbbc37f4ade8da8d130800050b9e563f2694b20f1923c2f76c7467ad42fe8a0c9431e0ac747a906fe9b855058414ec2d3f70b"}}}
            Jan 06 15:12:30 box:backuptask uploadAppSnapshot: matomo.monapp.fr uploaded to snapshot/app_5ed6495a-866e-4899-9d1d-437bafe6473a. 690.868 seconds
            Jan 06 15:12:30 box:backuptask backupAppWithTag: rotating matomo.monapp.fr snapshot of 03513981-96d3-4130-832d-ff15d8a124fc to path 2026-01-06-114408-065/app_matomo.monapp.fr_v1.53.2
            Jan 06 15:12:30 box:tasks updating task 21170 with: {"percent":31.76923076923078,"message":"Copying /mnt/managedbackups/03513981-96d3-4130-832d-ff15d8a124fc/snapshot/app_5ed6495a-866e-4899-9d1d-437bafe6473a to /mnt/managedbackups/03513981-96d3-4130-832d-ff15d8a124fc/2026-01-06-114408-065/app_matomo.monapp.fr_v1.53.2"}
            Jan 06 15:12:30 box:shell filesystem: ssh -o "StrictHostKeyChecking no" -i /tmp/identity_file_-mnt-managedbackups-03513981-96d3-4130-832d-ff15d8a124fc -p 23 user@user.your-storagebox.de cp -aR snapshot/app_5ed6495a-866e-4899-9d1d-437bafe6473a 2026-01-06-114408-065/app_matomo.monapp.fr_v1.53.2
            Jan 06 15:12:31 box:shell filesystem: ssh -o "StrictHostKeyChecking no" -i /tmp/identity_file_-mnt-managedbackups-03513981-96d3-4130-832d-ff15d8a124fc -p 23 user@user.your-storagebox.de cp -aR snapshot/app_5ed6495a-866e-4899-9d1d-437bafe6473a 2026-01-06-114408-065/app_matomo.monapp.fr_v1.53.2 errored BoxError: ssh exited with code 1 signal null
            Jan 06 15:12:31 at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:82:23)
            Jan 06 15:12:31 at ChildProcess.emit (node:events:519:28)
            Jan 06 15:12:31 at maybeClose (node:internal/child_process:1101:16)
            Jan 06 15:12:31 at ChildProcess._handle.onexit (node:internal/child_process:304:5) {
            Jan 06 15:12:31 reason: 'Shell Error',
            Jan 06 15:12:31 details: {},
            Jan 06 15:12:31 stdout: <Buffer >,
            Jan 06 15:12:31 stdoutLineCount: 0,
            Jan 06 15:12:31 stderr: <Buffer 2f 75 73 72 2f 62 69 6e 2f 63 70 3a 20 63 61 6e 6e 6f 74 20 73 74 61 74 20 27 73 6e 61 70 73 68 6f 74 2f 61 70 70 5f 35 65 64 36 34 39 35 61 2d 38 36 ... 54 more bytes>,
            Jan 06 15:12:31 stderrLineCount: 1,
            Jan 06 15:12:31 code: 1,
            Jan 06 15:12:31 signal: null,
            Jan 06 15:12:31 timedOut: false,
            Jan 06 15:12:31 terminated: false
            Jan 06 15:12:31 }
            Jan 06 15:12:31 box:storage/filesystem SSH remote copy failed, trying sshfs copy
            Jan 06 15:12:31 box:shell filesystem: cp -aR /mnt/managedbackups/03513981-96d3-4130-832d-ff15d8a124fc/snapshot/app_5ed6495a-866e-4899-9d1d-437bafe6473a /mnt/managedbackups/03513981-96d3-4130-832d-ff15d8a124fc/2026-01-06-114408-065/app_matomo.monapp.fr_v1.53.2
            
            
#6 · timconsidine (App Dev)

               @Benoit my first rsync backup did take a long time, but subsequent ones were much faster.
               I do a tar.gz backup as well, which takes longer.

               Both use a Hetzner Storage Box.
               The tar.gz backups are about the same speed as to Scaleway S3.

              9.0.15	hetznerstoragebox-rsync	125 App(s)	100.32 GB | 251317 file(s)	5 Jan 2026 at 12:11	
              9.0.15	hetzner-storage-box	125 App(s)	82.17 GB | 251258 file(s)	5 Jan 2026 at 03:04	
              
#7 · nebulon (Staff)

                 So yes, the speed will degrade quite a bit if the remote copy from snapshot to final folder fails. The question then is why /usr/bin/cp: cannot stat 'snapshot/app_<APP_ID>' happens. Since this happens after the upload (and assuming this is tgz format), maybe the file had not yet appeared on the Storage Box? Not sure how this can be the case, unless the snapshot upload failed before.

                 Could you try sshfs with tgz again and, if it fails, check on the storage itself whether the file exists?

#8 · nebulon (Staff)

                   Generally though, I would very much recommend Hetzner Storage Box with rsync and hardlinks: the initial upload is slower, but afterwards essentially only the difference is uploaded.
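A minimal local sketch of why the hardlink option makes rotation cheap (hypothetical paths; Cloudron's actual rotation logic is not this script, this only illustrates the idea with coreutils):

```shell
# Create a fake snapshot with one data file.
mkdir -p demo/snapshot
printf 'app data' > demo/snapshot/file.bin

# Rotate the snapshot into a timestamped folder using hardlinks (-l via -al)
# instead of copying file contents.
cp -al demo/snapshot demo/2026-01-06-114408

# Both directory entries now reference the same inode; no data was re-copied.
stat -c '%h' demo/snapshot/file.bin   # link count: 2
```

The next rsync then only has to transfer files that actually changed, since unchanged files already exist as links in the previous backup.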

#9 · Benoit (translator)

                    @nebulon I just stopped the backup at 55%! I checked the hardlinks box. I have a VM backup scheduled in an hour, so I've rescheduled the storage box backup for tonight; we'll see tomorrow if it goes through. I'll keep you posted.

#10 · Benoit (translator)

                       I've attempted multiple backups, but the results are still terrible. In my opinion, the Storage Box isn't production-ready for Cloudron. Even the delta sync with rsync takes forever. Case in point: a backup has been running for 6 hours and is only at 57% of 77 GB. I'm moving away from this solution to look for alternatives.

                       In all my logs I find these lines, which mark the point where it starts taking forever:

                      Jan 07 09:48:36 at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:82:23)
                      Jan 07 09:48:36 at ChildProcess.emit (node:events:519:28)
                      Jan 07 09:48:36 at maybeClose (node:internal/child_process:1101:16)
                      Jan 07 09:48:36 at ChildProcess._handle.onexit (node:internal/child_process:304:5) {
                      Jan 07 09:48:36 reason: 'Shell Error',
                      Jan 07 09:48:36 details: {},
                      Jan 07 09:48:36 stdout: <Buffer >,
                      Jan 07 09:48:36 stdoutLineCount: 0,
                      Jan 07 09:48:36 stderr: <Buffer 2f 75 73 72 2f 62 69 6e 2f 63 70 3a 20 63 61 6e 6e 6f 74 20 73 74 61 74 20 27 6e 75 6d 73 6f 6c 2d 2f 73 6e 61 70 73 68 6f 74 2f 61 70 70 5f 37 34 30 ... 62 more bytes>,
                      Jan 07 09:48:36 stderrLineCount: 1,
                      Jan 07 09:48:36 code: 1,
                      Jan 07 09:48:36 signal: null,
                      Jan 07 09:48:36 timedOut: false,
                      Jan 07 09:48:36 terminated: false
                      Jan 07 09:48:36 }
                      Jan 07 09:48:36 box:storage/filesystem SSH remote copy failed, trying sshfs copy
                      
#11 · nebulon (Staff)

                         We may have to debug this on your server then, since Hetzner Storage Box via sshfs, rsync and hardlinks is one of the best-working setups in my experience (I use exactly that on my personal Cloudron with loads of data).

                         Are you by chance using a Hetzner subaccount? Or could the server be running out of memory?

                         Either way, send a mail to support@cloudron.io with remote SSH enabled for us; I am sure we can sort this out.
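To rule out the memory theory while a backup is stalled, a quick check with plain Linux tooling (nothing Cloudron-specific; journalctl output depends on the distro) could look like:

```shell
# MemAvailable is the number that matters here, not MemFree.
grep -E 'MemTotal|MemAvailable' /proc/meminfo

# Any kernel OOM kills during the backup window would show up in the journal.
journalctl -k 2>/dev/null | grep -i 'out of memory' || true
```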

#12 · jdaviescoates

                          Yeah, I've also been using Hetzner Storage Box via sshfs but with tar.gz for years now with zero issues.


#13 · Benoit (translator)

                             Thanks a lot for your replies. I'm letting the backups finish and contacting Cloudron support in the meantime.

                             @nebulon I don't use subaccounts because I know it's risky. I have subfolders in /home/backups and no prefix.

                             Do you also have a lot of these lines?

                            Jan 07 11:05:38 at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:82:23)
                            Jan 07 11:05:38 at ChildProcess.emit (node:events:519:28)
                            Jan 07 11:05:38 at maybeClose (node:internal/child_process:1101:16)
                            Jan 07 11:05:38 at ChildProcess._handle.onexit (node:internal/child_process:304:5) {
                            Jan 07 11:05:38 reason: 'Shell Error',
                            Jan 07 11:05:38 details: {},
                            Jan 07 11:05:38 stdout: <Buffer >,
                            Jan 07 11:05:38 stdoutLineCount: 0,
                            Jan 07 11:05:38 stderr: <Buffer 2f 75 73 72 2f 62 69 6e 2f 63 70 3a 20 63 61 6e 6e 6f 74 20 73 74 61 74 20 27 73 6e 61 70 73 68 6f 74 2f 61 70 70 5f 66 63 31 64 33 36 33 38 2d 62 66 ... 54 more bytes>,
                            Jan 07 11:05:38 stderrLineCount: 1,
                            Jan 07 11:05:38 code: 1,
                            Jan 07 11:05:38 signal: null,
                            Jan 07 11:05:38 timedOut: false,
                            Jan 07 11:05:38 terminated: false
                            Jan 07 11:05:38 }
                            Jan 07 11:05:38 box:storage/filesystem SSH remote copy failed, trying sshfs copy
                            
#14 · nebulon (Staff)

                               I have never seen that fail, unless something is wrong with the SSH connection itself, but we really have to decode the stderr buffer to get a better understanding of why it fails (we have a fix for the next Cloudron version to make that readable). For the curious, https://git.cloudron.io/platform/box/-/commit/b89aa4488cef24b68e23fdcafdf7773d6ae9e762 is that change.
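Until that fix ships, the hex in the `stderr: <Buffer …>` log line can be decoded by hand. For example, the bytes Node printed in the log above (the trailing "... 54 more bytes" are truncated and not recoverable) decode like this:

```shell
# Decode the printable prefix of the stderr Buffer from the backup log.
python3 -c "print(bytes.fromhex(
    '2f7573722f62696e2f63703a2063616e6e6f74207374617420'
    '27736e617073686f742f6170705f35656436343935612d3836'
).decode())"
# -> /usr/bin/cp: cannot stat 'snapshot/app_5ed6495a-86
```

So the buffer is just the same `cannot stat` message that appears in the earlier ssh invocation.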

#15 · nebulon (Staff)

                                 To update everyone here: the root cause is that the backup target was configured with /prefix as the remote directory instead of putting the prefix in the prefix field. Curiously, it mostly worked, despite my Hetzner Storage Box only allowing /home as the root folder.

                                 Anyway, the fix is to set the prefix in the backup prefix field and use /home as the remote directory. We have to see how to make this less error-prone for Storage Boxes.
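This also explains the `cannot stat` error: the remote `cp` runs with a relative path, so if the configured remote directory does not match the root the upload actually used, the path resolves against the wrong directory. A minimal local illustration of that failure mode (hypothetical directory layout, plain coreutils):

```shell
# Fake a Storage Box layout where the data lives under home/backups.
mkdir -p storagebox/home/backups/snapshot/app_x

# Correct root: the relative path resolves and the rotation copy works.
(cd storagebox/home/backups && cp -aR snapshot/app_x rotated_ok)

# Wrong root (as with a /prefix remote-directory misconfiguration):
# prints the same 'cannot stat' error seen in the backup log.
(cd storagebox && cp -aR snapshot/app_x rotated_bad) 2>&1 | grep 'cannot stat'
```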

#16 · Benoit (translator)

                                  Many thanks for your support! I’m restarting my backups right away with the correct configuration. I’ll keep you posted on the backup speed, which should be significantly improved now that this configuration issue is fixed.

#17 · ikalou

                                     Thanks for this post. I was encountering similar issues using Hetzner Object Storage (the backup of the Cloudron server was taking hours for only 20 GB of data), then I switched to a Hetzner Storage Box with the recommendations @nebulon indicated. The backup was done in a matter of minutes 😍
                                     For some reason my initial trial failed, and when I reconfigured the backup site settings with a new Storage Box it worked (maybe because this time I enabled External Reachability in Hetzner).
