Cloudron Forum

SSH remote copy always failed, falling back to sshfs copy

Solved | Support
Tags: backup, restore, zfs, sshfs
17 Posts, 5 Posters, 616 Views
#1 Dummyzam (last edited by joseph)

I've been trying for some time to fix an issue setting up an sshfs backup using rsync to my NAS running TrueNAS.
Each time, the SSH remote copy fails and, as expected, it falls back to the sshfs copy... which takes ages.
I've read @jadudm's recent posts (and @james's, both very useful, thanks!), which seem to be very close to my issue, with a similar setup.
I've tried everything I could based on them, but I'm probably missing something and I keep hitting this issue:

• the initial upload (or the diff upload after the first backup) completes without issues (into the "snapshot" folder, if I understood correctly)
• but when it tries to do a remote copy, it always fails, so the backup process uploads everything again...

Here is a log extract (192.168.0.10 being my TrueNAS IP):

    2026-04-01T02:01:21.275Z shell: filesystem: ssh -o "StrictHostKeyChecking no" -i /tmp/identity_file_dba91bb5-5fb9-46b8-91cc-69d69437aaea -p 22 myuser@192.168.0.10 cp -aRl snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467 2026-04-01-020004-530/app_next.mydomain.com_v5.7.0
    (node:210013) [DEP0190] DeprecationWarning: Passing args to a child process with shell option true can lead to security vulnerabilities, as the arguments are not escaped, only concatenated.
    (Use `node --trace-deprecation ...` to show where the warning was created)
    2026-04-01T02:01:21.582Z shell: filesystem: ssh -o "StrictHostKeyChecking no" -i /tmp/identity_file_dba91bb5-5fb9-46b8-91cc-69d69437aaea -p 22 myuser@192.168.0.10 cp -aRl snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467 2026-04-01-020004-530/app_next.mydomain.com_v5.7.0 errored BoxError: ssh exited with code 1 signal null
        at ChildProcess.<anonymous> (file:///home/yellowtent/box/src/shell.js:70:23)
        at ChildProcess.emit (node:events:508:28)
        at maybeClose (node:internal/child_process:1101:16)
        at ChildProcess._handle.onexit (node:internal/child_process:305:5) {
      reason: 'Shell Error',
      details: {},
      stdout: <Buffer >,
      stdoutString: '',
      stdoutLineCount: 0,
      stderr: <Buffer 63 70 3a 20 63 61 6e 6e 6f 74 20 73 74 61 74 20 27 73 6e 61 70 73 68 6f 74 2f 61 70 70 5f 36 64 61 38 35 63 39 61 2d 66 62 66 30 2d 34 35 36 33 2d 62 ... 45 more bytes>,
      stderrString: "cp: cannot stat 'snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467': No such file or directory\n",
      stderrLineCount: 1,
      code: 1,
      signal: null,
      timedOut: false,
      terminated: false
    }
    2026-04-01T02:01:21.585Z storage/filesystem: SSH remote copy failed, trying sshfs copy
    2026-04-01T02:01:21.585Z shell: filesystem: cp -aRl /mnt/managedbackups/99e49fad-652d-4a9b-ae14-2200e68188d8/snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467 /mnt/managedbackups/99e49fad-652d-4a9b-ae14-2200e68188d8/2026-04-01-020004-530/app_next.mydomain.com_v5.7.0
    2026-04-01T07:06:43.896Z backuptask: copy: copied successfully to 2026-04-01-020004-530/app_next.mydomain.com_v5.7.0. Took 18322.66 seconds
    

I've tried setting up the TrueNAS share in multiple ways: with/without a prefix; directly on a (sub)dataset; in a $HOME subfolder (I never used $HOME alone 😉); with/without 777 on the whole folder/dataset dedicated to it...
I've made sure it is not a performance bottleneck (on either side, or in connection speed/bandwidth).
Every time it fails at the same point.

From what I understand of my reading, I guess it is a permission-related issue, but I have no idea in which way...
Any suggestion is welcome 🙂.
Thanks in advance!

Additional technical details:

Cloudron version 9.1.5 running on Ubuntu 22.04.5 LTS; hosted on a "mydomain.com" VPS with 6 GB RAM and 4 cores
Backup setup: sshfs + rsync + file encryption + hardlinks enabled; filename encryption disabled
Data to back up: a simple Cloudron instance with "only" Nextcloud running; 70 GB, 125366 files
Backup target: TrueNAS SCALE 25.04; dedicated user ("myuser"); dedicated dataset or directory

PS: 98% of my integrity checks fail, even on my primary target (a Hetzner storage box), which doesn't have the issue above. But one issue at a time 🙂

#2 RianKellyIT

      To be fair, the SSH remote copy failing is almost certainly a ZFS hardlink limitation rather than a permissions issue. Cloudron's backup system uses "cp -aRl" on the remote host to create a hardlink copy of the snapshot directory into a dated backup directory. This is the fast path that avoids re-uploading data between backup runs.

      The problem with TrueNAS is that ZFS hardlinks only work within the same dataset. If your snapshot directory and your dated backup directories are on different ZFS datasets, or if the copy crosses a dataset boundary at any point, cp -aRl will fail silently (or with a "cross-device link" error) and Cloudron falls back to sshfs and uploads everything again.

      A few things to check:

      First, look at your TrueNAS dataset layout. If your Cloudron backup target is something like /mnt/pool/cloudron and the backup creates subdirectories inside it, everything needs to be within the same ZFS dataset. Run "zfs list" on TrueNAS and confirm that snapshot and the dated directories are under the same mount point.

      Second, run the cp command manually to see the actual error. SSH in to TrueNAS as your backup user and try:
      cp -aRl /path/to/snapshot /path/to/test-copy

      If you see "cp: cannot create hard link: Invalid cross-device link" or similar, the dataset boundary is your problem.

      Third, if you cannot fix the dataset layout, you can work around it by enabling Zstandard compression on the dataset and accepting the sshfs path, or by restructuring your TrueNAS datasets so the entire backup target is a single flat dataset with no child datasets underneath.

      What does the log show right after the cp command fails? There should be an error message before it falls back to sshfs.
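The manual hardlink test from the second point can be sketched locally with throwaway paths (the `mktemp` locations below are placeholders; on TrueNAS you would point it at your real snapshot directory instead):

```shell
# Build a throwaway source tree standing in for the snapshot directory
base=$(mktemp -d)
mkdir -p "$base/snapshot"
echo data > "$base/snapshot/file"

# Hardlink-copy it next to the source; same filesystem, so this should succeed
cp -aRl "$base/snapshot" "$base/test-copy"

# A link count of 2 shows cp created hardlinks instead of duplicating data
stat -c %h "$base/test-copy/file"   # prints 2
```

If the destination sat on a different ZFS dataset, the same cp would instead fail with "Invalid cross-device link", since hardlinks cannot span filesystems.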

#3 Dummyzam

Thanks for your great response!
I will check the 3 points right now and report back.
But my TrueNAS setup is very simple (it's my first TN install 🙂). I have only one zpool with several datasets in it. I don't use sub-datasets. So in almost all of my previous tests, the backup target was a dedicated dataset or a subdirectory of a dataset, with no cross-dataset reference (I didn't know that was possible 🙂).
I did only one test with a sub-dataset dedicated to backups as the target.
Every time, the same issue...
Is there a point I'm missing in your explanation?
(and can the silent failure explain the "cp: cannot stat No such file or directory" error in the logs?)

#4 Dummyzam (last edited by Dummyzam)

Here are the results:

1. zfs list returns a lot, but here are the lines corresponding to the backup targets (each used for a different test, never at the same time):
          admin@truenas[~]$ sudo zfs list
          NAME                                                       USED  AVAIL  REFER  MOUNTPOINT
          OnlyZpol                                                    649G  2.88T  6.30G  /mnt/OnlyZpol
          
          OnlyZpol/CloudronBackup                                     256G  2.88T   256G  /mnt/OnlyZpol/CloudronBackup
          [...]
          OnlyZpol/UserTst                                            133G  2.88T  66.4G  /mnt/OnlyZpol/UserTst
          OnlyZpol/UserTst/CloudronBck                               66.4G  2.88T  66.4G  /mnt/OnlyZpol/UserTst/CloudronBck
          [...]
          boot-pool                                                 28.4G  68.5G    96K  none
          
2. cp -aRl /path/to/snapshot /path/to/test-copy
  I did the test on both backup paths I have; it works and is quick.
          myuser@truenas:~$ cp -aRl /mnt/OnlyZpol/UserTst/CloudronBck/snapshot/ /mnt/OnlyZpol/UserTst/CloudronBck/testdir/
          myuser@truenas:~$ 
          myuser@truenas:~$ cp -aRl /mnt/OnlyZpol/CloudronBackup/snapshot/ /mnt/OnlyZpol/CloudronBackup/testdir/
          myuser@truenas:~$ 
          
3. The current compression is LZ4. I haven't changed it to ZSTD yet, given the previous points. Moreover, I'm not sure I understand the "accepting the sshfs path" part.

So, with my limited knowledge, I'm unsure my issue comes from cross-dataset references.

But I will test again without hardlinks.
EDIT: as expected, same issue, the remote copy fails...

#5 joseph (Staff)

            Have you tried running ssh -o "StrictHostKeyChecking no" -i /tmp/identity_file_dba91bb5-5fb9-46b8-91cc-69d69437aaea -p 22 myuser@192.168.0.10 cp -aRl snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467 temp ?

#6 Dummyzam

Thanks for pointing this out because no, I didn't.
But now I have, and I get the same issue.
/tmp/identity_file... was not found, but I was able to log in with a password (I need to remove that).
I did the test with an absolute path and it works.
I'm not sure how I should interpret all of this... Is this an issue similar to this topic? Have I set up the sshfs target incorrectly, causing the relative path to fail? Is this another connection/permission issue, as I saw in another post I can't find?

#7 jadudm

Is Cloudron using a correct/full path when issuing the cp over ssh? I've had this problem too, and I know I'm not spanning ZFS pools. If we enter a root path into the config, shouldn't Cloudron use the full paths, for correctness/completeness/clarity, when issuing the cp?

                I don't think that helps much, but I've seen the same persistent issues that @dummyzam is pointing to.

                I use Cloudron on a DXP2800 NAS w/ 8TB in ZFS RAID1

#8 joseph (Staff)

                  yeah, it's always absolute paths.

#9 Dummyzam

Thanks @joseph for your answer.
But the command you gave me (the same one as in the error log) uses a relative path.
And when I try the same command manually with an absolute path, it seems to work.
What should I make of this?

#10 joseph (Staff)

                      @Dummyzam you are right. For sshfs, the paths are relative (to the user's home). The cp command is run after ssh into that server.

I did the test with an absolute path and it works.

                      What did you mean by this? I think we have to figure out why ssh -p 22 myuser@192.168.0.10 cp -aRl snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467 temp does not work. Is that because the ssh user has some other home directory? Maybe your prefix (in the backup site config) is an absolute path and not a relative path?

#11 Dummyzam

@joseph said:

The paths are relative (to the user's home)

Is it relative to the user's $HOME or to the target directory?
Because if it is relative to the target directory, it should have been '../CloudronBackup/snapshot' in my case, not 'snapshot'.

                        Because

                        @joseph said:
                        What did you mean by this?

                        My user $home is : /mnt/OnlyZpol/UserTst/
                        My target directory is : /mnt/OnlyZpol/CloudronBackup/

                        Running :

                        ssh -o "StrictHostKeyChecking no" -p 22 myuser@192.168.0.10 cp -aRl snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467 temp

                        -> FAIL

                        ssh -o "StrictHostKeyChecking no" -p 22 myuser@192.168.0.10 cp -aRl /mnt/OnlyZpol/CloudronBackup/snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467 /mnt/OnlyZpol/CloudronBackup/temp

                        -> Success

                        Is that because the ssh user has some other home directory? Maybe your prefix (in the backup site config) is an absolute path and not a relative path?

So yes to the first and no to the second; I never set an "absolute prefix".
I will try to use a subdirectory of the user's home, but I've already tried something quite similar with a sub-dataset, without luck.

IMO it would be safer to use absolute paths with sshfs, as you can't really ensure ssh will log in to the same directory each time.
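The FAIL/Success pair above can be reproduced without any real SSH: a command passed to ssh starts in the remote user's home directory, so the relative snapshot/... path resolves against $HOME rather than against the backup target. A local simulation with placeholder directories standing in for the two remote paths:

```shell
# Two placeholder directories standing in for the remote layout
home=$(mktemp -d)      # plays /mnt/OnlyZpol/UserTst (the SSH user's home)
target=$(mktemp -d)    # plays /mnt/OnlyZpol/CloudronBackup (the backup target)
mkdir -p "$target/snapshot"

# Relative path, resolved from the home directory: no snapshot/ exists there,
# so it fails just like "cp: cannot stat 'snapshot/...': No such file or directory"
(cd "$home" && cp -aRl snapshot temp) 2>/dev/null || echo "relative: FAIL"

# Absolute paths work no matter which directory the command starts in
cp -aRl "$target/snapshot" "$target/temp" && echo "absolute: Success"
```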

#12 joseph (Staff)

@Dummyzam I think you can set the prefix to /mnt/OnlyZpol/CloudronBackup. That will make cp use absolute paths.

#13 Dummyzam

@joseph thanks for the interesting idea, but Cloudron does not allow it.
On the backup setup it shows this message: "prefix must be a relative path" 😞

#14 jadudm

                              I'm not sure where in the Box codebase this is, but the SSH backup behavior is strange. (Or, not documented sufficiently clearly for me to make sense of it.) I spent a bunch of time trying to figure this out as well, and ultimately gave up. However, @dummyzam is encountering many of the same kinds of confusion I did.

                              Ideally:

                              1. The remote directory should be the base for all operations, as far as backup site configuration is concerned.
                              2. All operations should be absolute paths, and be rooted at join(remote dir, prefix). No backup operations should take place outside of that root path on the remote system.

                              If those things are true, then it should be the case that given a target/remote directory of (say) /mnt/OnlyZpol/CloudronBackup/ and a prefix of backups, then all operations will be against /mnt/OnlyZpol/CloudronBackup/backups/*.

                              If $HOME is used by Box when doing SSHFS backups, then that should be documented somewhere.

                              As I learned, the target/remote directory will be set to 777, which can be a problem if the user you're authenticating as lacks permissions, or if you make the mistake of using $HOME as your remote directory (as this can upset the permissions that SSHD expects for .ssh).
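The second point above amounts to simple prefix concatenation; a sketch with hypothetical values (remote_dir, prefix, and app_x are placeholders, not Cloudron's actual config keys):

```shell
# Hypothetical values: a remote target directory and a relative prefix
remote_dir=/mnt/OnlyZpol/CloudronBackup
prefix=backups

# Rooting every operation here would make all remote paths absolute,
# independent of wherever the SSH login happens to land
root="$remote_dir/$prefix"
echo "$root/snapshot/app_x"   # prints /mnt/OnlyZpol/CloudronBackup/backups/snapshot/app_x
```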


#15 nebulon (Staff)

So for sshfs, mostly we just do a local sshfs (FUSE) mount and then the backup strategy runs exactly the same way as for all other backup targets, so that code is not target-specific. However, if you use rsync with hardlink support, sshfs can really shine: for that we have added specific code paths that do the remote ssh login and create the hardlinks on the server directly, which is the most common operation for incremental backups. This speeds up the backup process a lot.

I am writing this since the discussion above seems to be about that optimized code path, which does the ssh login first and then runs the cp on the remote host (not your Cloudron).

#16 nebulon (Staff)

To conclude here: the faster remote-copy path when using sshfs with rsync assumes that the backup folder is actually within that SSH user's HOME folder. This is what is causing the problems here. It was mostly designed to work with Hetzner storage boxes, a popular choice, where that assumption holds true. This will require fixes on the platform side and might take some time to get released, so the current workaround is to use an SSH user whose HOME is set such that the target folder is within it. This is not ideal, but it at least gets you the much faster backups for the moment.
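A quick way to check whether a given setup satisfies that assumption (the two paths are illustrative; substitute the SSH user's actual home and the configured backup target):

```shell
# Illustrative values; with the workaround, home contains the backup target
home=/mnt/OnlyZpol/CloudronBackup
target=/mnt/OnlyZpol/CloudronBackup/backups

# The fast remote-copy path only applies when target sits inside home
case "$target/" in
  "$home"/*) echo "fast remote copy should work" ;;
  *)         echo "will fall back to slow sshfs copy" ;;
esac
```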

#17 nebulon (Staff)

                                    This is now fixed for the next release with https://git.cloudron.io/platform/box/-/commit/a4ea80cf5eb26b08462430111e8b6f75175e749d

nebulon has marked this topic as solved.
