Does mounting a backup location via SSHFS change the remote directory permissions?
-
Description
I have set up a TrueNAS Scale host. We'll call this `nas.lan`, and my Cloudron host `cloudron.lan`. They're both internal (10.x.y.z) addresses that my local DNS server has provided static DHCP entries for. I can ping them, etc.

It seems that configuring a directory on `nas.lan` via the Cloudron Backups SSHFS option changes the directory permissions from 755 to 777, which breaks `ssh`.

- On my Cloudron host, I created an SSH keypair.
- I created a user, `cbackup`, on `nas.lan`.
- I provided the public key for the `cbackup` user to `nas.lan` (this is part of the GUI-driven user creation process in TrueNAS).
- I `ssh` into `cloudron.lan`, and I can then use the private key I created to `ssh` into `nas.lan`. This tells me the key works.
- I can also do this from another host in the network, if I move the key. So I believe the key is good, and multiple machines can hit `nas.lan` and log in as the `cbackup` user with an SSH key.
- I go to `cloudron.lan` and think, "this is excellent, I will now configure SSHFS for backups." It is worth noting that I am excited about moving my backups to a ZFS mirrored pair of drives, served from `nas.lan` and mounted from `cloudron.lan` via SSHFS.
- I go to the Backups portion of the admin and choose "Configure" to set up my SSHFS-mounted backup location.
- I enter all of the information. It is correct.
- I get a failure for unknown reasons. (A way to reproduce the mount by hand is sketched just after this list.)
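In case it helps isolate the failure outside the Cloudron UI: the mount can be attempted by hand from `cloudron.lan` with the same options the generated mount unit passes to sshfs (a sketch; the mount point `/mnt/sshfs-test` is made up, the identity-file path is taken from the logs further down).

```
# Hypothetical manual test of the same mount Cloudron attempts, with
# sshfs's own debug output so the failure point is visible.
sudo mkdir -p /mnt/sshfs-test
sudo sshfs cbackup@nas.lan:/mnt/poolone/cbackup /mnt/sshfs-test \
  -o allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,reconnect \
  -o sshfs_debug
# Clean up afterwards:
sudo umount /mnt/sshfs-test
```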
Now, here's what's cool.
- When I first create the `cbackup` user on `nas.lan`, I can see that the home directory has permissions 755.
- When I ssh in with my key, I can see that the home directory has 755.
- If I create files, my home directory remains 755.
- If I create directories, my home directory remains 755.
- If I wait a bit, just to see what happens, I can still ssh in and my permissions are 755.
- If I restart `nas.lan`, to see if something magical happens on restart, nothing magical happens, and my `cbackup` user's home directory still has permissions 755.
Now, if I go to the configuration for backups on `cloudron.lan` and try to configure an SSHFS mount on the NAS, the mount fails. If I log into the NAS shell via the browser, `su` to root, and look at my `cbackup` user's home directory... it has permissions 777.

Question: Does the SSHFS mount do anything to change the permissions of the home directory on the remote system? Why, after trying to configure an SSHFS backup mount, would the home directory on the remote system change from 755 to 777?
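Worth noting why 777 would break `ssh` at all (this is standard sshd behavior, not anything Cloudron-specific): with the default `StrictModes yes`, sshd refuses publickey logins when the target user's home directory or `~/.ssh` is group- or world-writable. A quick check on `nas.lan`, using the paths from this report (a sketch):

```
# Run on nas.lan (e.g. via the web terminal, as root) after a failed
# mount attempt. With sshd's default `StrictModes yes`, a world-writable
# $HOME makes sshd ignore ~/.ssh/authorized_keys, so key logins fail.
stat -c '%a %U:%G %n' /mnt/poolone/cbackup /mnt/poolone/cbackup/.ssh
sshd -T 2>/dev/null | grep -i strictmodes   # effective setting; defaults to yes
chmod 755 /mnt/poolone/cbackup              # restore the expected mode
```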
Steps to reproduce
- `chmod 755 /mnt/poolone/cbackup` (this is $HOME)
- Confirm that permissions on $HOME are 755
- SSH into the machine using the private key, via `ssh` on the command line
- Confirm 755 permissions
- Create things, do things, log out, restart `nas.lan`, etc.; observe a non-changing home directory with perms 755
- SSHFS setup from Cloudron (a way to watch the mode flip as it happens is sketched after this list)
- Cannot SSH into the machine
- Sneak into the machine using the web terminal on `nas.lan`, and confirm that $HOME now has perms 777
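To pinpoint the exact moment the mode flips during the "SSHFS setup from Cloudron" step, one could watch the directory from the `nas.lan` web terminal while the Cloudron configuration runs (a sketch, using the path from this report):

```
# Hypothetical observation loop: print the mode bits of the backup target
# once a second while triggering the SSHFS configuration from cloudron.lan.
watch -n 1 'stat -c "%a %U:%G %n" /mnt/poolone/cbackup'
```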
Logs
If I confirm permissions 755 and SSH in, everything is fine. Below are the logs from an attempt to mount the SSHFS backup location.
```
2025-11-02T20:15:26.944Z box:backups setStorage: validating new storage configuration
2025-11-02T20:15:26.944Z box:backups setupManagedStorage: setting up mount at /mnt/backup-storage-validation with sshfs
2025-11-02T20:15:26.946Z box:shell mounts /usr/bin/sudo -S /home/yellowtent/box/src/scripts/addmount.sh [Unit]\nDescription=backup-storage-validation\n\nRequires=network-online.target\nAfter=network-online.target\nBefore=docker.service\n\n\n[Mount]\nWhat=cbackup@22:/mnt/poolone/cbackup\nWhere=/mnt/backup-storage-validation\nOptions=allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,reconnect\nType=fuse.sshfs\n\n[Install]\nWantedBy=multi-user.target\n\n 10
2025-11-02T20:15:30.113Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive
2025-11-02T20:15:37.521Z box:shell Failed to mount
2025-11-02T20:15:37.525Z box:shell mounts: /usr/bin/sudo -S /home/yellowtent/box/src/scripts/addmount.sh [Unit]\nDescription=backup-storage-validation\n\nRequires=network-online.target\nAfter=network-online.target\nBefore=docker.service\n\n\n[Mount]\nWhat=cbackup@22:/mnt/poolone/cbackup\nWhere=/mnt/backup-storage-validation\nOptions=allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,reconnect\nType=fuse.sshfs\n\n[Install]\nWantedBy=multi-user.target\n\n 10 errored BoxError: mounts exited with code 3 signal null
    at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:137:19)
    at ChildProcess.emit (node:events:519:28)
    at ChildProcess._handle.onexit (node:internal/child_process:294:12) {
  reason: 'Shell Error',
  details: {},
  code: 3,
  signal: null
}
2025-11-02T20:15:37.525Z box:shell mounts: mountpoint -q -- /mnt/backup-storage-validation
2025-11-02T20:15:40.090Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive
2025-11-02T20:15:42.535Z box:shell mounts: mountpoint -q -- /mnt/backup-storage-validation errored BoxError: mountpoint exited with code null signal SIGTERM
    at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:72:23)
    at ChildProcess.emit (node:events:519:28)
    at maybeClose (node:internal/child_process:1105:16)
    at ChildProcess._handle.onexit (node:internal/child_process:305:5) {
  reason: 'Shell Error',
  details: {},
  stdout: <Buffer >,
  stdoutLineCount: 0,
  stderr: <Buffer >,
  stderrLineCount: 0,
  code: null,
  signal: 'SIGTERM'
}
2025-11-02T20:15:42.536Z box:shell mounts: systemd-escape -p --suffix=mount /mnt/backup-storage-validation
2025-11-02T20:15:42.551Z box:shell mounts: journalctl -u mnt-backup\x2dstorage\x2dvalidation.mount\n -n 10 --no-pager -o json
2025-11-02T20:15:42.570Z box:shell mounts /usr/bin/sudo -S /home/yellowtent/box/src/scripts/rmmount.sh /mnt/backup-storage-validation
2025-11-02T20:15:50.084Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive
```
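For readability, here is the systemd mount unit from the log above with its `\n` escapes expanded (content verbatim, including the `What=` value exactly as it appears in the log):

```
[Unit]
Description=backup-storage-validation

Requires=network-online.target
After=network-online.target
Before=docker.service


[Mount]
What=cbackup@22:/mnt/poolone/cbackup
Where=/mnt/backup-storage-validation
Options=allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,reconnect
Type=fuse.sshfs

[Install]
WantedBy=multi-user.target
```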
Troubleshooting Already Performed

See above.
System Details
Generate Diagnostics Data
I'll send this if it seems warranted.
Cloudron Version
8.3.2
Ubuntu Version
```
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 24.04.2 LTS
Release:        24.04
Codename:       noble
```

Cloudron installation method
A long time ago. Manual.
Output of `cloudron-support --troubleshoot`

I can clean up my IPv6 at some point. I nuked it further up the chain, too.
```
Vendor: Dell Inc.
Product: OptiPlex 7040
Linux: 6.8.0-86-generic
Ubuntu: noble 24.04
Processor: Intel(R) Core(TM) i5-6500T CPU @ 2.50GHz BIOS Intel(R) Core(TM) i5-6500T CPU @ 2.50GHz CPU @ 2.4GHz x 4
RAM: 32729416KB
Disk: /dev/nvme0n1p2 734G

[OK] node version is correct
[FAIL] Server has an IPv6 address but api.cloudron.io is unreachable via IPv6 (ping6 -q -c 1 api.cloudron.io)

Instead of disabling IPv6 globally, you can disable it at an interface level.

sysctl -w net.ipv6.conf.enp0s31f6.disable_ipv6=1
sysctl -w net.ipv6.conf.tailscale0.disable_ipv6=1

For the above configuration to persist across reboots, you have to add below to /etc/sysctl.conf

net.ipv6.conf.enp0s31f6.disable_ipv6=1
net.ipv6.conf.tailscale0.disable_ipv6=1
```

-
After looking into this, that logic hasn't changed between Cloudron 8 and 9. The last major change there was https://git.cloudron.io/platform/box/-/commit/6ace8d1ac50df2169b28c6a1534cb482526055cd, which goes a bit into the details of the `chmod`. But tbh, I have to do some more reading up on that bit.

For a start, I guess the issue in your case would go away if you do not mount the user's HOME but some subfolder of it. Then the 777 should not interfere with the system itself, as that subfolder won't be as special as HOME.
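A sketch of that workaround, using the pool path from the report (the subdirectory name `cloudron-backups` is made up):

```
# On nas.lan: create a dedicated backup target inside cbackup's home so
# that whatever chmod the mount setup performs lands on the subfolder,
# not on $HOME, leaving sshd's StrictModes check on $HOME unaffected.
sudo -u cbackup mkdir -p /mnt/poolone/cbackup/cloudron-backups
sudo chmod 755 /mnt/poolone/cbackup/cloudron-backups
# Then point the Cloudron backup configuration at
#   /mnt/poolone/cbackup/cloudron-backups
# instead of /mnt/poolone/cbackup.
```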