Extremely slow backups to Hetzner Storage Box (rsync & tar.gz) – replacing MinIO used on a dedicated Cloudron
-
So yes, the speed will degrade quite a bit if the remote copy from snapshot to final folder fails. The question then is why /usr/bin/cp: cannot stat 'snapshot/app_<APP_ID>' happens. Since this happens after the download (and assuming this is tgz format), maybe the file did not yet appear on the storage box? Not sure how this can be the case, unless the snapshot upload has failed before. But maybe you can try sshfs with tgz again and, if it fails, check on the storage box itself whether the file exists?
-
Generally though, I would very much recommend the Hetzner Storage Box with rsync and hardlinks: the initial upload is slower, but afterwards essentially only the difference is uploaded.
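The hardlink scheme described above can be sketched locally with rsync's --link-dest option. This is only an illustration of the mechanism, not Cloudron's actual backup commands; all paths and dates here are made up:

```shell
#!/bin/sh
# Sketch: incremental backups where unchanged files become hardlinks.
# Directories below are hypothetical demo paths.
SRC=/tmp/backup-demo/data
DEST=/tmp/backup-demo/snapshots
mkdir -p "$SRC" "$DEST"
echo "unchanged content" > "$SRC/big-file"

# First backup: everything is copied.
rsync -a "$SRC/" "$DEST/2025-01-06/"

# Second backup: files identical to the previous snapshot are hardlinked
# instead of copied, so no data needs to be transferred again.
rsync -a --link-dest="$DEST/2025-01-06" "$SRC/" "$DEST/2025-01-07/"

# Both snapshots now reference the same inode for the unchanged file.
stat -c %i "$DEST/2025-01-06/big-file" "$DEST/2025-01-07/big-file"
```

Because each snapshot directory looks like a full backup but shares storage for unchanged files, only the delta costs upload time and disk space.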
-
@nebulon I just stopped the backup at 55%! I checked the hardlinks box. I have a VM backup scheduled in an hour, so I've rescheduled the storage box backup for tonight; we'll see tomorrow if it goes through. I'll keep you posted.
I've attempted multiple backups, but the results are still terrible. In my opinion, the storage box isn't production-ready for Cloudron. Even the delta sync with rsync takes forever. Case in point: a backup has been running for 6 hours and is only at 57% of 77GB. I'm moving away from this solution to look for alternatives.
In all my logs I find these lines, which mark the point where it starts taking forever...
Jan 07 09:48:36     at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:82:23)
Jan 07 09:48:36     at ChildProcess.emit (node:events:519:28)
Jan 07 09:48:36     at maybeClose (node:internal/child_process:1101:16)
Jan 07 09:48:36     at ChildProcess._handle.onexit (node:internal/child_process:304:5) {
Jan 07 09:48:36   reason: 'Shell Error',
Jan 07 09:48:36   details: {},
Jan 07 09:48:36   stdout: <Buffer >,
Jan 07 09:48:36   stdoutLineCount: 0,
Jan 07 09:48:36   stderr: <Buffer 2f 75 73 72 2f 62 69 6e 2f 63 70 3a 20 63 61 6e 6e 6f 74 20 73 74 61 74 20 27 6e 75 6d 73 6f 6c 2d 2f 73 6e 61 70 73 68 6f 74 2f 61 70 70 5f 37 34 30 ... 62 more bytes>,
Jan 07 09:48:36   stderrLineCount: 1,
Jan 07 09:48:36   code: 1,
Jan 07 09:48:36   signal: null,
Jan 07 09:48:36   timedOut: false,
Jan 07 09:48:36   terminated: false
Jan 07 09:48:36 }
Jan 07 09:48:36 box:storage/filesystem SSH remote copy failed, trying sshfs copy
-
We may have to debug this on your server then, since hetzner storage box via sshfs, rsync and hardlink is one of the best working setups in my experience (I use exactly that also on my personal Cloudron with loads of data).
Are you by chance using a hetzner subaccount or so? Also maybe the server might run out of memory?
Either way maybe send a mail to support@cloudron.io while having remote ssh enabled for us, I am sure we can sort this out.
-
Yeah, I've also been using Hetzner Storage Box via sshfs but with tar.gz for years now with zero issues.
-
Thanks a lot for your replies. I'm letting the backups finish and I'm contacting Cloudron support in the meantime.
@nebulon I don't use subaccounts because I know it's risky. I have subfolders in /home/backups and no prefix.
Do you also see a lot of these lines?
Jan 07 11:05:38     at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:82:23)
Jan 07 11:05:38     at ChildProcess.emit (node:events:519:28)
Jan 07 11:05:38     at maybeClose (node:internal/child_process:1101:16)
Jan 07 11:05:38     at ChildProcess._handle.onexit (node:internal/child_process:304:5) {
Jan 07 11:05:38   reason: 'Shell Error',
Jan 07 11:05:38   details: {},
Jan 07 11:05:38   stdout: <Buffer >,
Jan 07 11:05:38   stdoutLineCount: 0,
Jan 07 11:05:38   stderr: <Buffer 2f 75 73 72 2f 62 69 6e 2f 63 70 3a 20 63 61 6e 6e 6f 74 20 73 74 61 74 20 27 73 6e 61 70 73 68 6f 74 2f 61 70 70 5f 66 63 31 64 33 36 33 38 2d 62 66 ... 54 more bytes>,
Jan 07 11:05:38   stderrLineCount: 1,
Jan 07 11:05:38   code: 1,
Jan 07 11:05:38   signal: null,
Jan 07 11:05:38   timedOut: false,
Jan 07 11:05:38   terminated: false
Jan 07 11:05:38 }
Jan 07 11:05:38 box:storage/filesystem SSH remote copy failed, trying sshfs copy
-
I have never seen that fail, unless something is wrong with the SSH connection itself, but we really have to decode the stderr buffer to get a better understanding of why it fails (we have a fix for the next Cloudron version to make that readable). For the curious, https://git.cloudron.io/platform/box/-/commit/b89aa4488cef24b68e23fdcafdf7773d6ae9e762 is that change.
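Until that fix lands, the stderr buffer can be decoded by hand. Both log dumps above start with the same hex bytes; a quick sketch (assuming xxd is installed, as it is on most Linux systems):

```shell
# Decode the leading hex bytes from the <Buffer ...> dump in the log.
# The dump is truncated, so only the start of the message is recoverable.
echo '2f 75 73 72 2f 62 69 6e 2f 63 70 3a 20 63 61 6e 6e 6f 74 20 73 74 61 74 20 27' \
  | xxd -r -p
# → /usr/bin/cp: cannot stat '
```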
-
So, to update everyone here: the root cause is that the backup site was configured with /prefix in the remote directory instead of in the prefix field. Curiously, it mostly worked, despite my Hetzner Storage Box only allowing /home as the root folder. Anyway, the fix is to set the prefix in the backup prefix field and use /home in the remote directory. We have to see how to make this less error prone for storage boxes.
-
Many thanks for your support! I’m restarting my backups right away with the correct configuration. I’ll keep you posted on the backup speed, which should be significantly improved now that this configuration issue is fixed.
-
Thanks for this post. I was encountering similar issues using Hetzner object storage (a backup of the Cloudron server was taking hours for only 20GB of data). I then switched to a Hetzner Storage Box with the recommendations @nebulon indicated, and the backup was done in a matter of minutes.

For some reason my initial trial failed, and when I reconfigured the backup site settings with a new storage box it worked (maybe because this time I enabled External Reachability in Hetzner).