Cloudron Forum

Why does Cloudron set 777 permissions for SSHFS?

Solved | Support
Tags: backup, sshfs, security
This topic was forked by james from "Extremely slow backups to Hetzner Storage Box (rsync & tar.gz) – replacing MinIO used on a dedicated Cloudron".
  • jadudm wrote (#1, last edited by jadudm):

    I'm struggling with this problem as well.

    I'm finding that when I try an SSHFS mount to my TrueNAS box...

    1. Assuming the user is cloudback
    2. The path is /home/pool/dataset/cloudback
    3. I set my backup path to /home/pool/dataset/cloudback and my prefix to full

    Cloudron always changes the permissions on the directory /home/pool/dataset/cloudback to 777. This seems... grossly insecure. And, worse, it breaks SSH, because you can't have a filesystem above the .ssh directory with permissions that open.

    However, I also find that if I set the path deeper into the account (with no prefix), I avoid the permissions issue, and instead, I get backups that hang/lock, especially on Immich. (That could be unrelated.)

    My single biggest question is why is Cloudron setting perms to 777 anywhere?

    I'm trying again by creating a directory in the homedir, and using that as my base path. Then, within that, I'm using the "path" option to create subfolders. I don't have a reason I think this might help, but given comments above, I'm trying it. 🤷

    I use Cloudron on a DXP2800 NAS w/ 8TB in ZFS RAID1

  • james (Staff) wrote (#2, last edited by james):

      Hello @jadudm

      @jadudm said in Extremely slow backups to Hetzner Storage Box (rsync & tar.gz) – replacing MinIO used on a dedicated Cloudron:

      Cloudron always changes the permissions on the directory /home/pool/dataset/cloudback to 777. This seems... grossly insecure.

      This behaviour comes from SSHFS itself: Cloudron has to set the SSHFS option allow_other, and this sets the mountpoint, e.g. /mnt/cloudronbackup, to 777.
      From the SSHFS manual - https://man7.org/linux/man-pages/man1/sshfs.1.html :

      By default, only the mounting user will be able to access the filesystem. Access for other users can be enabled by passing -o allow_other.

      In your example without allow_other only the user cloudback would be able to access the mount.
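      To see just the permission change in isolation, here is a minimal local sketch: no real SSHFS involved, only a throwaway temp directory standing in for the mountpoint, to show the mode flip that allow_other implies.

```shell
# Local sketch only: a temp directory stands in for the SSHFS mountpoint.
mnt=$(mktemp -d)
chmod 700 "$mnt"
before=$(stat -c '%a' "$mnt")   # 700: only the mounting user may enter
chmod 777 "$mnt"
after=$(stat -c '%a' "$mnt")    # 777: any local user may enter
echo "mountpoint mode: $before -> $after"
rmdir "$mnt"
```

      The same before/after is what you would observe with `stat` on the real mountpoint before and after mounting with `-o allow_other`.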

      @jadudm said in Extremely slow backups to Hetzner Storage Box (rsync & tar.gz) – replacing MinIO used on a dedicated Cloudron:

      And, worse, it breaks SSH, because you can't have a filesystem above the .ssh directory with permissions that open.

      Is that really the case?
      I am running a Cloudron server with a Hetzner Storage Box as the backup provider with SSHFS.
      In the home there is .ssh/authorized_keys which gives access to the whole Storage Box.
      There also is a sub-folder named storage_volume01 which I use for a volume mount with SSHFS which also has a .ssh folder, but with a Hetzner Storage Box Sub-Account.
      This is working without any issues.

      Do you mean this breaks ssh on the target system?

  • jadudm wrote (#3):
        Hi @james ,

        Fair enough. To be clear:

        If you make the mistake of using $HOME for the target directory, then the behavior of allow_other changes the permissions on $HOME to 777. The .ssh directory must exist under a home directory that is either 755, 751, or 750. (It probably can be something else...) Point being, "fool me twice," I have made this mistake on more than one system, and wondered why it is so hard to set up an SSHFS mountpoint. It is because it works once, and then not a second time, because the home directory permissions have changed, "breaking" SSH on the target system.

        Perhaps this is clearly described in the SSH mount docs, and I missed it, but it is a silent/invisible source of confusion when setting up SSHFS mountpoints.
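        For reference, a minimal sketch of the permission layout that sshd's StrictModes check accepts (home directory not group/world-writable). The paths here are a temp directory standing in for the remote user's $HOME, not real config:

```shell
# Sketch: the modes that keep sshd key auth working on the target.
# A temp directory stands in for the remote user's $HOME.
home=$(mktemp -d)
mkdir "$home/.ssh"
touch "$home/.ssh/authorized_keys"
chmod 750 "$home"                        # must not be group/world-writable
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
h=$(stat -c '%a' "$home")
s=$(stat -c '%a' "$home/.ssh")
k=$(stat -c '%a' "$home/.ssh/authorized_keys")
echo "$h $s $k"                          # 750 700 600
rm -rf "$home"
```

        Once allow_other has flipped $HOME to 777, sshd refuses key auth until the home directory is restored to one of these stricter modes.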

        (An aside: I still don't know why any user other than the user I assign would need to access the mountpoint: I provide a private key and a username. Only that user should be able to carry out the SSHFS mount, and all of the writes should happen as that user. Why would I ever need some other user to be able to read my backups on a remote system?)

        We can re-close this as solved, because I more clearly understand Cloudron's behavior. Because two things can be true, I understand the behavior, and I still think it is incorrect: if I provide a private key and username, that is the user I expect all operations to happen as, and I do not expect permissions to be set so that any user of the remote system can read the files. But, expectations are tantamount to assumptions. 😃

        I use Cloudron on a DXP2800 NAS w/ 8TB in ZFS RAID1


  • james (Staff) wrote (#4):

          Hello @jadudm

          @jadudm said in Why does Cloudron set 777 permissions for SSHFS?:

          If you make the mistake of using $HOME for the target directory

          Indeed, this is never a good idea.

          @jadudm said in Why does Cloudron set 777 permissions for SSHFS?:

          (An aside: I still don't know why any user other than the user I assign would need to access the mountpoint: I provide a private key and a username. Only that user should be able to carry out the SSHFS mount, and all of the writes should happen as that user. Why would I ever need some other user to be able to read my backups on a remote system?)

          I think you misunderstand the issue. The permission is set on the mounting system, not on your target system.
          This is not Cloudron behaviour, it is SSHFS behaviour.
          You can test this yourself:
          mount via SSHFS as your regular user, e.g. cloudback, without the allow_other option, then switch to the root user and try to access the SSHFS mount; you will get a permission denied, even as the root user.

          On my.cloudron.dev I have created the folder /root/sshfs-test.
          On my local machine I run the following commands:

          sudo mkdir /mnt/sshfs
          sudo chown -R $USER:$USER /mnt/sshfs
          sshfs -p 22 root@my.cloudron.dev:/root/sshfs-test /mnt/sshfs
          ls -lah /mnt/sshfs
          total 8.0K
          drwxr-xr-x  1 root root 4.0K Feb  9 13:34 .
          drwxr-xr-x 12 root root 4.0K Feb  9 13:30 ..
          # you can see, the mounted folder has user and group set to `root` but I am locally user `james` and I am able to access it because `james` has mounted it with SSHFS
          echo 'Cats and Dogs' > /mnt/sshfs/animals.txt
          cat /mnt/sshfs/animals.txt
          Cats and Dogs
          # Switching to user root on my local machine
          sudo su -
          ls -lah /mnt/sshfs
          ls: cannot access '/mnt/sshfs': Permission denied
          

          Since Cloudron mounts SSHFS via a systemd mount unit (see systemctl status mnt-cloudronbackup.mount), the mount is performed by user root.
          Thus, without allow_other, only user root on the Cloudron server would be allowed to access the mount.
          But the user yellowtent, who creates the backups, needs access to the mount as well, so backups can be written directly into it.
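          For orientation, a generic systemd mount unit for an SSHFS target looks roughly like the sketch below. The host, paths, and key file are hypothetical and this is not Cloudron's actual unit; the point is only where allow_other ends up (the Options= line), and that the unit runs as root:

```ini
# /etc/systemd/system/mnt-cloudronbackup.mount  (hypothetical sketch;
# a systemd mount unit's filename must match its Where= path)
[Unit]
Description=SSHFS backup mount (illustrative only)

[Mount]
# Hypothetical remote: the cloudback user and path from the example above
What=cloudback@truenas.example:/home/pool/dataset/cloudback
Where=/mnt/cloudronbackup
Type=fuse.sshfs
# allow_other is what makes the mountpoint world-accessible (777)
Options=allow_other,IdentityFile=/root/.ssh/backup_key,reconnect

[Install]
WantedBy=multi-user.target
```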

          A workaround would be to create the backups as user yellowtent in a local temporary folder and then move them to the mount as user root.
          But this can lead to all sorts of issues, storage being just one example:
          a Nextcloud app using 300GB of disk space on a system with only 500GB total capacity would run the system out of space while staging the backup locally.
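          The arithmetic behind that example, as a trivial sketch (numbers taken from the paragraph above): staging a backup locally needs free space at least equal to the app's data size.

```shell
# Staging a local copy before moving it to the mount needs
# free space >= the app's data size.
total_gb=500   # total disk capacity from the example
app_gb=300     # Nextcloud data already on disk
free_gb=$((total_gb - app_gb))
if [ "$free_gb" -lt "$app_gb" ]; then
  echo "staging fails: need ${app_gb}GB free, only ${free_gb}GB available"
fi
```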

          I hope this made it a bit more understandable.

          1 Reply Last reply
          1
  • jadudm wrote (#5, last edited by jadudm):

            Ah. I see.

            My apologies. I am very used to being the same user on both the host and the target system. And, I'm thinking in terms of scp or sftp, not an SSHFS mount. The difference matters a great deal; your answer is clear, and I see why I was confused/wrong.

            My fog of confusion wafts away in the light of illumination. 🙏 Thank you.

            I use Cloudron on a DXP2800 NAS w/ 8TB in ZFS RAID1

  • james (Staff) wrote (#6):

              Hello @jadudm
              Always happy to help.

  • james has marked this topic as solved