Does mounting a backup location via SSHFS change the remote directory permissions?
-
Description
I have set up a TrueNAS Scale host. We'll call this `nas.lan`, and my Cloudron host `cloudron.lan`. They're both internal (10.x.y.z) addresses that my local DNS server has provided static DHCP entries for. I can ping them, etc.

It seems that configuring a directory on `nas.lan` via the Cloudron Backups SSHFS option changes the directory permissions from 755 to 777, which breaks `ssh`.

- On my Cloudron host, I created an SSH keypair.
- I created a user, `cbackup`, on `nas.lan`.
- I provided the public key for the `cbackup` user to `nas.lan` (this is part of the GUI-driven user creation process in TrueNAS).
- I `ssh` into `cloudron.lan`, and I can then use the private key I created to `ssh` into `nas.lan`. This tells me the key works (see the sketch after this list).
- I can also do this from another host in the network, if I move the key. So, I believe the key is good. And, multiple machines can hit `nas.lan` and log in as the `cbackup` user with an SSH key.
- I go to `cloudron.lan`, and think "this is excellent, I will now configure SSHFS for backups." It is important to note that I am excited about moving my backups to a ZFS mirrored pair of drives, served from `nas.lan` and mounted from `cloudron.lan` via SSHFS.
- I go to the Backups portion of the admin, and choose "Configure" to set up my SSHFS-mounted backup location.
- I enter all of the information. It is correct.
- I get a failure for unknown reasons.
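For completeness, the "key works" check above is nothing fancier than this (just a sketch; the key path is an assumption on my part, the user and host are the ones described above):

```bash
# On cloudron.lan: confirm the private key authenticates as cbackup on the NAS,
# and show the home directory permissions while we're there.
# (~/.ssh/id_ed25519 is an assumed path; use whatever key you generated)
ssh -i ~/.ssh/id_ed25519 -o IdentitiesOnly=yes cbackup@nas.lan 'id; ls -ld "$HOME"'
```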
Now, here's what's cool.
- When I first create the `cbackup` user on `nas.lan`, I can see that the home directory has permissions 755.
- When I ssh in with my key, I can see that the home directory has 755.
- If I create files, my home directory remains 755.
- If I create directories, my home directory remains 755.
- If I wait a bit, just to see what happens, for no reason, I can ssh in and my permissions are 755.
- If I restart `nas.lan`, to see if something magical happens on restart, nothing magic happens, and my `cbackup` user has a home directory with permissions 755.
Now, if I go to the configuration for backups on `cloudron.lan` and try to configure an SSHFS mount on the NAS, the mount fails. If I log into the NAS shell via the browser, `su` to root, and look at my `cbackup` user's home directory... it has permissions 777.

Question: Does the SSHFS mount do anything to change the permissions of the home directory on the remote system? Why, after trying to configure an SSHFS backup mount, would the home directory on the remote system change from 755 to 777?
Steps to reproduce
- `chmod 755 /mnt/poolone/cbackup` (this is $HOME)
- Confirm that permissions on $HOME are 755
- SSH into the machine using the private key, from `ssh` on the command line
- Confirm 755 permissions
- Create things, do things, log out, restart `nas.lan`, etc., and observe a non-changing home directory with perms 755
- SSHFS setup from Cloudron
- Cannot SSH into the machine
- Sneak into the machine using the web terminal on `nas.lan`, and confirm that $HOME now has perms 777 (the before/after check is sketched below)
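Roughly, the before/after check looks like this (a sketch; `/mnt/poolone/cbackup` is the home directory from above, and the output noted in the comments is what I observed, not something the commands guarantee):

```bash
# On nas.lan, as root, before touching the Cloudron UI:
chmod 755 /mnt/poolone/cbackup
stat -c '%a %U:%G %n' /mnt/poolone/cbackup   # expect: 755 cbackup:cbackup /mnt/poolone/cbackup

# ...now go to cloudron.lan and run Backups -> Configure -> SSHFS...

# Back on nas.lan (the web terminal works fine), after the failed mount attempt:
stat -c '%a %U:%G %n' /mnt/poolone/cbackup   # observed: 777 cbackup:cbackup /mnt/poolone/cbackup
```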
Logs
If I confirm permissions 755 and SSH in, everything is fine. Below are the logs from an attempt to mount the SSHFS backup location.
2025-11-02T20:15:26.944Z box:backups setStorage: validating new storage configuration
2025-11-02T20:15:26.944Z box:backups setupManagedStorage: setting up mount at /mnt/backup-storage-validation with sshfs
2025-11-02T20:15:26.946Z box:shell mounts /usr/bin/sudo -S /home/yellowtent/box/src/scripts/addmount.sh [Unit]\nDescription=backup-storage-validation\n\nRequires=network-online.target\nAfter=network-online.target\nBefore=docker.service\n\n\n[Mount]\nWhat=cbackup@22:/mnt/poolone/cbackup\nWhere=/mnt/backup-storage-validation\nOptions=allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,reconnect\nType=fuse.sshfs\n\n[Install]\nWantedBy=multi-user.target\n\n 10
2025-11-02T20:15:30.113Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive
2025-11-02T20:15:37.521Z box:shell Failed to mount
2025-11-02T20:15:37.525Z box:shell mounts: /usr/bin/sudo -S /home/yellowtent/box/src/scripts/addmount.sh [Unit]\nDescription=backup-storage-validation\n\nRequires=network-online.target\nAfter=network-online.target\nBefore=docker.service\n\n\n[Mount]\nWhat=cbackup@22:/mnt/poolone/cbackup\nWhere=/mnt/backup-storage-validation\nOptions=allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,reconnect\nType=fuse.sshfs\n\n[Install]\nWantedBy=multi-user.target\n\n 10 errored BoxError: mounts exited with code 3 signal null at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:137:19) at ChildProcess.emit (node:events:519:28) at ChildProcess._handle.onexit (node:internal/child_process:294:12) { reason: 'Shell Error', details: {}, code: 3, signal: null }
2025-11-02T20:15:37.525Z box:shell mounts: mountpoint -q -- /mnt/backup-storage-validation
2025-11-02T20:15:40.090Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive
2025-11-02T20:15:42.535Z box:shell mounts: mountpoint -q -- /mnt/backup-storage-validation errored BoxError: mountpoint exited with code null signal SIGTERM at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:72:23) at ChildProcess.emit (node:events:519:28) at maybeClose (node:internal/child_process:1105:16) at ChildProcess._handle.onexit (node:internal/child_process:305:5) { reason: 'Shell Error', details: {}, stdout: <Buffer >, stdoutLineCount: 0, stderr: <Buffer >, stderrLineCount: 0, code: null, signal: 'SIGTERM' }
2025-11-02T20:15:42.536Z box:shell mounts: systemd-escape -p --suffix=mount /mnt/backup-storage-validation
2025-11-02T20:15:42.551Z box:shell mounts: journalctl -u mnt-backup\x2dstorage\x2dvalidation.mount\n -n 10 --no-pager -o json
2025-11-02T20:15:42.570Z box:shell mounts /usr/bin/sudo -S /home/yellowtent/box/src/scripts/rmmount.sh /mnt/backup-storage-validation
2025-11-02T20:15:50.084Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive

Troubleshooting Already Performed
See above.
System Details
Generate Diagnostics Data
I'll send this if it seems warranted.
Cloudron Version
8.3.2
Ubuntu Version
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.2 LTS
Release: 24.04
Codename: noble

Cloudron installation method
A long time ago. Manual.
Output of `cloudron-support --troubleshoot`

I can clean up my IPv6 at some point. I nuked it further up the chain, too.
Vendor: Dell Inc.
Product: OptiPlex 7040
Linux: 6.8.0-86-generic
Ubuntu: noble 24.04
Processor: Intel(R) Core(TM) i5-6500T CPU @ 2.50GHz BIOS Intel(R) Core(TM) i5-6500T CPU @ 2.50GHz CPU @ 2.4GHz x 4
RAM: 32729416KB
Disk: /dev/nvme0n1p2 734G
[OK] node version is correct
[FAIL] Server has an IPv6 address but api.cloudron.io is unreachable via IPv6 (ping6 -q -c 1 api.cloudron.io)
Instead of disabling IPv6 globally, you can disable it at an interface level.
sysctl -w net.ipv6.conf.enp0s31f6.disable_ipv6=1
sysctl -w net.ipv6.conf.tailscale0.disable_ipv6=1
For the above configuration to persist across reboots, you have to add below to /etc/sysctl.conf
net.ipv6.conf.enp0s31f6.disable_ipv6=1
net.ipv6.conf.tailscale0.disable_ipv6=1

-
After looking into this, that logic hasn't changed between Cloudron 8 and 9. The last major change there was https://git.cloudron.io/platform/box/-/commit/6ace8d1ac50df2169b28c6a1534cb482526055cd, which goes a bit into the details of the `chmod`. But tbh I have to do some more reading up on that bit.

For a start, I guess the issue in your case would go away if you do not mount the user's HOME but some subfolder of it. Then the 777 should not interfere with the system itself, as that subfolder won't be as special as HOME.
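Roughly something like this (the subfolder name is just an example):

```bash
# On nas.lan: create a dedicated backup target below the cbackup home directory
# and point the Cloudron SSHFS configuration at that path instead of at HOME.
mkdir -p /mnt/poolone/cbackup/cloudron-backups
chown cbackup:cbackup /mnt/poolone/cbackup/cloudron-backups

# HOME itself should stay non-world-writable so sshd keeps accepting the key:
chmod 755 /mnt/poolone/cbackup

# Remote path to enter in the Cloudron backup configuration:
#   /mnt/poolone/cbackup/cloudron-backups
```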
-
This solved the problem.
(Editing later: "this" meaning "mounting a path like
$HOME/subdirsolved the problem, because the permissions on $HOME remained755, but the permissions onsubdirwere still changed to777. This is good, because $HOME has to be 755, or SSH will fail. But...)I'm still concerned that the remote directory becomes
drwxrwxrwx 3 cbackup cbackup 3 Nov 3 14:33 aloewhich seems awfully permissive. In this instance, I don't have a security threat (or, if someone gets onto the NAS, this is the least of my problems). But once I'm SSH'd into a machine via SSHFS, I'd think that
drwx------would be fine. (Put another way: once Cloudron has the private key, it should not need to set permissions on the remote directory at all... unless this is somehow related to symlinking, or whatrsyncwants to do, or...)Either way, many thanks for the good ideas. I think I'm moving forward. We'll call this one closed.
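Postscript for anyone who lands here: the reason $HOME has to stay 755 is almost certainly sshd's StrictModes check, which refuses public-key logins when the home directory (or ~/.ssh) is group- or world-writable. A quick way to confirm and recover on `nas.lan`, assuming the paths above:

```bash
# StrictModes defaults to "yes"; this shows the effective setting (run as root):
sshd -T | grep -i strictmodes

# Put the permissions back where sshd wants them for the cbackup user:
chmod 755 /mnt/poolone/cbackup
chmod 700 /mnt/poolone/cbackup/.ssh
chmod 600 /mnt/poolone/cbackup/.ssh/authorized_keys

# Then watch sshd while retrying the key login:
tail -f /var/log/auth.log
```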
-
james has marked this topic as solved
-
Another lesson learned. @nebulon, the SSHFS mounting code is kinda fragile, I think. This is still on 8.3.2.
In setting up a volume mount, I tried pasting in an SSH private key.
If I paste in

-----BEGIN ... ----- asdfkljasdflkjasdf alsdkfjals kdfjalskdjf asdlfjkasdlfkjasldfkj -----END ...-----

then things do not work. However, if I carefully reformat my key:

-----BEGIN ... -----
asdfkljasdflkjasdf
alsdkfjals
kdfjalskdjf
asdlfjkasdlfkjasldfkj
-----END ...-----

and paste it in, then the key works. This matters because I stored my key in a custom field in Bitwarden, and hit the "copy" button in the Bitwarden browser GUI. The key came out somewhat mangled.
I would argue the whitespace was safe to split on, and could have been reformatted easily into a good key. However, I had to paste it into Cloudron exactly right, or else I got auth failures.
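(For what it's worth, a quick way to catch a mangled paste before it ever reaches Cloudron; just a sketch, and the filename is only an example:)

```bash
# Save the pasted key to a file and ask OpenSSH whether it parses.
# ssh-keygen only prints the matching public key if the private key is well-formed,
# so a whitespace-mangled paste fails here instead of as a silent auth failure later.
chmod 600 pasted_key
ssh-keygen -y -f pasted_key && echo "key is well-formed"
```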
Maybe that is on me, but it feels like, when setting up SSH mounts, splitting and reformatting on whitespace should be straightforward. Given that the whitespace issues are invisible to me (and Cloudron does not help me debug them... nor do the auth.log messages on the remote server), it might be nice if the GUI were a bit more forgiving, or able to give me a hint.
Food for thought, anyway. I don't know if/how many of my issues have been this vs. other challenges. (I know the permissions issue is real, and repeatable. This also seems to be repeatable.)
Good luck; the v9 firehose seems real...
-
And, while I'm at it...
This came up because I had set up:
- Backups
- An SSHFS mount for NextCloud
- A separate SSHFS mount for Navidrome
All of these connections worked. I even went through multiple backup cycles.
Then, this afternoon, the mounts all failed.
I cannot determine what caused it. I was able to reset some keys, and get mounts to work. But, now, my mounts are failing again, and I suspect I'm going to find permissions/other issues. I cannot yet get to a root cause.
- I am very suspicious of Cloudron's SSHFS mount code. Given that it seems to make aggressive permission changes, I'm worried. That said,
- It could be something about TrueNAS Scale. That said, it is "just" a Debian. On the other hand, I've never worked with ZFS or TrueNAS. So... is there something going on, where permissions are shifting?
What bothers me is that I can, from both my Cloudron host and my local machine, use the SSH keys in question without difficulty. So, I am not inclined to believe that TrueNAS is doing something odd, given that standard SSH from a Linux command line can connect while Cloudron fails to make mounts. Something is breaking, and I don't know if I have the right logs/tools to debug what is going on in Box.
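For the record, this is roughly how I've been poking at it on the Cloudron host (a sketch; the unit name is the one from the log earlier in this thread, and `nas.lan` is my hostname, so adjust accordingly):

```bash
# Inspect the mount units Cloudron generated and why the validation one failed:
systemctl list-units --type=mount --all
journalctl -u 'mnt-backup\x2dstorage\x2dvalidation.mount' -n 50 --no-pager

# Replay the same sshfs mount by hand, with debug output, to separate
# key/auth problems from fuse/systemd problems (run as root):
mkdir -p /mnt/sshfs-test
sshfs -o allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,sshfs_debug \
    cbackup@nas.lan:/mnt/poolone/cbackup /mnt/sshfs-test
ls -ld /mnt/sshfs-test
umount /mnt/sshfs-test
```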
Happy to do what I can to help.
-
And...
Reading
https://superuser.com/questions/1477472/openssh-public-key-file-format
and digging into some of the RFCs a bit deeper, it seems like this is a complex, largely unspecified space.
It might be good if Cloudron:
- Was clear about what format it could ingest, and
- Considered accepting a file upload for the private key
as opposed to dealing with copy-paste. But, either way... being clear about what was expected from us for the key (at least as far as Cloudron is concerned) would be good.
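In the meantime, a user-side sanity check before pasting anything in (just a sketch; the filename is an example, and I'm not claiming Cloudron requires any particular format):

```bash
# What flavour of key is this, and does OpenSSH like it?
head -1 id_backup              # OPENSSH vs RSA/EC PEM header
ssh-keygen -l -f id_backup     # prints a fingerprint, or fails loudly if the file is malformed

# If a classic PEM-encoded key turns out to be needed, ssh-keygen can rewrite
# the file in place (it prompts for the old/new passphrase):
ssh-keygen -p -m PEM -f id_backup
```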