


Cloudron Forum


SSH remote copy always failed, falling back to sshfs copy

Unsolved | Support
backuprestore
4 Posts 2 Posters 37 Views 2 Watching
  • Dummyzam (offline)
    wrote last edited by
    #1

    I'm trying to solve an issue I've had for some time with setting up an sshfs backup, using rsync, to my NAS running TrueNAS.
    Each time, the SSH remote copy fails and, as expected, it falls back to sshfs copy... which takes ages.
    I've read @jadudm's recent posts (and @james's, both very useful, thanks!), which seem to be very close to my issue, with a similar setup.
    I've tried everything I could based on them, but I'm probably missing something and I keep hitting this issue:

    • the initial upload (or the diff upload after the first backup) completes without issues (into the "snapshot" folder, if I understood correctly)
    • but when it tries to do a remote copy, it always fails, so the backup process uploads everything again...

    Here is a log extract (192.168.0.10 being my TrueNAS IP):

    2026-04-01T02:01:21.275Z shell: filesystem: ssh -o "StrictHostKeyChecking no" -i /tmp/identity_file_dba91bb5-5fb9-46b8-91cc-69d69437aaea -p 22 myuser@192.168.0.10 cp -aRl snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467 2026-04-01-020004-530/app_next.mydomain.com_v5.7.0
    (node:210013) [DEP0190] DeprecationWarning: Passing args to a child process with shell option true can lead to security vulnerabilities, as the arguments are not escaped, only concatenated.
    (Use `node --trace-deprecation ...` to show where the warning was created)
    2026-04-01T02:01:21.582Z shell: filesystem: ssh -o "StrictHostKeyChecking no" -i /tmp/identity_file_dba91bb5-5fb9-46b8-91cc-69d69437aaea -p 22 myuser@192.168.0.10 cp -aRl snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467 2026-04-01-020004-530/app_next.mydomain.com_v5.7.0 errored BoxError: ssh exited with code 1 signal null
        at ChildProcess.<anonymous> (file:///home/yellowtent/box/src/shell.js:70:23)
        at ChildProcess.emit (node:events:508:28)
        at maybeClose (node:internal/child_process:1101:16)
        at ChildProcess._handle.onexit (node:internal/child_process:305:5) {
      reason: 'Shell Error',
      details: {},
      stdout: <Buffer >,
      stdoutString: '',
      stdoutLineCount: 0,
      stderr: <Buffer 63 70 3a 20 63 61 6e 6e 6f 74 20 73 74 61 74 20 27 73 6e 61 70 73 68 6f 74 2f 61 70 70 5f 36 64 61 38 35 63 39 61 2d 66 62 66 30 2d 34 35 36 33 2d 62 ... 45 more bytes>,
      stderrString: "cp: cannot stat 'snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467': No such file or directory\n",
      stderrLineCount: 1,
      code: 1,
      signal: null,
      timedOut: false,
      terminated: false
    }
    2026-04-01T02:01:21.585Z storage/filesystem: SSH remote copy failed, trying sshfs copy
    2026-04-01T02:01:21.585Z shell: filesystem: cp -aRl /mnt/managedbackups/99e49fad-652d-4a9b-ae14-2200e68188d8/snapshot/app_6da85c9a-fbf0-4563-baaa-1ad3080cf467 /mnt/managedbackups/99e49fad-652d-4a9b-ae14-2200e68188d8/2026-04-01-020004-530/app_next.mydomain.com_v5.7.0
    2026-04-01T07:06:43.896Z backuptask: copy: copied successfully to 2026-04-01-020004-530/app_next.mydomain.com_v5.7.0. Took 18322.66 seconds
    

    I've tried setting up the TrueNAS share in multiple ways: with/without a prefix; directly on a (sub)dataset; in a $HOME subfolder (I never used $HOME itself 😉); with/without 777 on the whole folder/dataset dedicated to it...
    I've made sure it is not a performance bottleneck (on both sides, and in connection speed/bandwidth).
    Every time it fails at the same point.

    From what I understand of my reading, I guess it is a permission-related issue, but I have no idea in which way...
    Any suggestion is welcome 🙂.
    Thanks in advance!

    Additional technical details:

    Cloudron version 9.1.5 running on Ubuntu 22.04.5 LTS; hosted on a "mydomain.com" VPS, 6 GB RAM, 4 cores
    Backup setup: sshfs + rsync + file encryption + hardlinks enabled; file name encryption disabled
    Data to back up: a simple Cloudron instance with "only" Nextcloud running; 70 GB, 125366 files
    Backup target: TrueNAS Scale 25.04; dedicated user ("myuser"); dedicated dataset or directory.
    

    PS: 98% of my integrity checks fail, even on my primary target (a Hetzner storage box), which doesn't have the above issue. But one issue after the other 🙂

    • RianKellyIT (offline)
      wrote last edited by
      #2

      To be fair, the SSH remote copy failing is almost certainly a ZFS hardlink limitation rather than a permissions issue. Cloudron's backup system uses "cp -aRl" on the remote host to create a hardlink copy of the snapshot directory into a dated backup directory. This is the fast path that avoids re-uploading data between backup runs.

      The problem with TrueNAS is that ZFS hardlinks only work within the same dataset. If your snapshot directory and your dated backup directories are on different ZFS datasets, or if the copy crosses a dataset boundary at any point, cp -aRl will fail (typically with a "cross-device link" error) and Cloudron falls back to sshfs and uploads everything again.
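      The hardlink mechanics are easy to see locally. A minimal sketch with throwaway /tmp paths (nothing here is your real layout or Cloudron's actual code): within one filesystem, cp -aRl links files instead of copying their data, which is why the fast path is nearly instant.

```shell
# Sketch of the hardlink fast path, using throwaway paths.
tmp=$(mktemp -d)
mkdir -p "$tmp/snapshot/app_x"
echo data > "$tmp/snapshot/app_x/file"

# Hardlink "copy" into a dated dir on the same filesystem:
# no data is duplicated, only directory entries are created.
cp -aRl "$tmp/snapshot/app_x" "$tmp/2026-04-01-backup"

# Both names now point at the same inode, so the link count is 2.
stat -c %h "$tmp/2026-04-01-backup/file"   # → 2
```

      If the source and destination were on different filesystems (and every ZFS dataset mounts as its own filesystem), the same cp -l call would fail with "Invalid cross-device link" instead.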

      A few things to check:

      First, look at your TrueNAS dataset layout. If your Cloudron backup target is something like /mnt/pool/cloudron and the backup creates subdirectories inside it, everything needs to be within the same ZFS dataset. Run "zfs list" on TrueNAS and confirm that snapshot and the dated directories are under the same mount point.

      Second, run the cp command manually to see the actual error. SSH in to TrueNAS as your backup user and try:
      cp -aRl /path/to/snapshot /path/to/test-copy

      If you see "cp: cannot create hard link: Invalid cross-device link" or similar, the dataset boundary is your problem.

      Third, if you cannot fix the dataset layout, you can work around it by enabling Zstandard compression on the dataset and accepting the sshfs path, or by restructuring your TrueNAS datasets so the entire backup target is a single flat dataset with no child datasets underneath.
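      If in doubt, a dataset-agnostic way to check whether two paths share one filesystem is to compare the device number that stat reports. The paths below are stand-ins (throwaway temp dirs); substitute your real snapshot and dated backup directories on the TrueNAS side:

```shell
# Stand-in paths; replace with your real snapshot and dated backup dirs.
snap=$(mktemp -d)
dated=$(mktemp -d)

# stat %d prints the device number; hardlinks only work when they match.
if [ "$(stat -c %d "$snap")" = "$(stat -c %d "$dated")" ]; then
  echo "same filesystem: cp -aRl can hardlink"
else
  echo "different filesystems: cp -aRl will hit a cross-device link error"
fi
```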

      What does the log show right after the cp command fails? There should be an error message before it falls back to sshfs.

      • Dummyzam (offline)
        wrote last edited by
        #3

        Thanks for your great response!
        I will check the 3 points right now and report back.
        But my TrueNAS setup is very simple (it's my first TN install 🙂). I have only one zpool with several datasets in it. I don't use sub-datasets. So in almost all of my previous tests, the backup target was a dedicated dataset, or a sub-directory of a dataset, with no cross-dataset reference (I didn't know that was possible 🙂).
        I did only one test with a sub-dataset dedicated to backups as the target.
        Every time, the same issue...
        Is there a point I'm missing in your explanation?
        (And can that failure mode explain the "cp: cannot stat ... No such file or directory" error in the logs?)
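        Side note, and this is just me guessing from the error text: the failing cp in the log uses relative paths ("snapshot/..."), and non-interactive SSH commands run from the remote user's home directory, so those paths would resolve against $HOME. A local sketch (throwaway dirs, made-up names) of how that produces exactly this "cannot stat" message:

```shell
# Simulating the remote side with throwaway dirs (hypothetical layout).
home=$(mktemp -d)    # stands in for the SSH user's $HOME
store=$(mktemp -d)   # stands in for the directory that really holds the backup

mkdir -p "$store/snapshot/app_x"

# From $HOME, the relative path "snapshot/app_x" is looked up there
# and is not found:
( cd "$home" && cp -aRl snapshot/app_x copy ) 2>&1
# → cp: cannot stat 'snapshot/app_x': No such file or directory

# From the directory that actually contains "snapshot", the same call works:
( cd "$store" && cp -aRl snapshot/app_x copy ) && echo ok
```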

        • Dummyzam (offline)
          wrote last edited by Dummyzam
          #4

          Here are the results:

          1. zfs list shows a lot, but here are the lines corresponding to the backup targets (each used for a different test, never at the same time):
          admin@truenas[~]$ sudo zfs list
          NAME                                                       USED  AVAIL  REFER  MOUNTPOINT
          OnlyZpol                                                    649G  2.88T  6.30G  /mnt/OnlyZpol
          
          OnlyZpol/CloudronBackup                                     256G  2.88T   256G  /mnt/OnlyZpol/CloudronBackup
          [...]
          OnlyZpol/UserTst                                            133G  2.88T  66.4G  /mnt/OnlyZpol/UserTst
          OnlyZpol/UserTst/CloudronBck                               66.4G  2.88T  66.4G  /mnt/OnlyZpol/UserTst/CloudronBck
          [...]
          boot-pool                                                 28.4G  68.5G    96K  none
          
          2. cp -aRl /path/to/snapshot /path/to/test-copy
             I did the test on both backup paths I have; it works and is quick.
          myuser@truenas:~$ cp -aRl /mnt/OnlyZpol/UserTst/CloudronBck/snapshot/ /mnt/OnlyZpol/UserTst/CloudronBck/testdir/
          myuser@truenas:~$ 
          myuser@truenas:~$ cp -aRl /mnt/OnlyZpol/CloudronBackup/snapshot/ /mnt/OnlyZpol/CloudronBackup/testdir/
          myuser@truenas:~$ 
          
          3. Current compression is LZ4. I haven't changed it to ZSTD yet, pending the previous points. Moreover, I'm not sure I understand the "accepting the sshfs path" part.

          So, with my limited knowledge, I'm unsure my issue comes from cross-dataset references.

          But I will test again without hardlinks.
          EDIT: as expected, same issue, the remote copy fails...

